00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 117 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3295 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.215 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.215 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.446 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.456 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.467 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:06.467 > git config core.sparsecheckout # timeout=10 00:00:06.479 > git read-tree -mu HEAD # timeout=10 00:00:06.494 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:06.510 Commit message: "packer: Add bios builder" 00:00:06.510 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:06.593 [Pipeline] Start of Pipeline 00:00:06.604 [Pipeline] library 00:00:06.605 Loading library shm_lib@master 00:00:06.606 Library shm_lib@master is cached. Copying from home. 00:00:06.619 [Pipeline] node 00:00:06.659 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.661 [Pipeline] { 00:00:06.673 [Pipeline] catchError 00:00:06.674 [Pipeline] { 00:00:06.684 [Pipeline] wrap 00:00:06.692 [Pipeline] { 00:00:06.697 [Pipeline] stage 00:00:06.699 [Pipeline] { (Prologue) 00:00:06.887 [Pipeline] sh 00:00:07.168 + logger -p user.info -t JENKINS-CI 00:00:07.186 [Pipeline] echo 00:00:07.188 Node: GP11 00:00:07.196 [Pipeline] sh 00:00:07.498 [Pipeline] setCustomBuildProperty 00:00:07.511 [Pipeline] echo 00:00:07.513 Cleanup processes 00:00:07.519 [Pipeline] sh 00:00:07.805 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.805 3736554 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.820 [Pipeline] sh 00:00:08.106 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.106 ++ grep -v 'sudo pgrep' 00:00:08.106 ++ awk '{print $1}' 00:00:08.106 + sudo kill -9 00:00:08.106 + true 00:00:08.122 [Pipeline] cleanWs 00:00:08.132 [WS-CLEANUP] Deleting project workspace... 00:00:08.132 [WS-CLEANUP] Deferred wipeout is used... 
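The prologue above ends by killing any processes left over from a previous run of this workspace. Pulled out of the xtrace output, the same cleanup idea can be sketched as the short script below; the workspace path is the one from this job, xargs -r stands in for the command substitution used in the trace, and the trailing || true mirrors the "+ true" fallback so an empty match never fails the stage.

    #!/usr/bin/env bash
    # Kill stale processes from an earlier run that still reference this workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # pgrep -af prints "PID full-command-line" for every match; drop the pgrep
    # line itself, keep only the PIDs, and SIGKILL them. xargs -r skips the kill
    # entirely when nothing matched, and || true keeps the stage green either way.
    sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}' \
        | xargs -r sudo kill -9 || true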
00:00:08.139 [WS-CLEANUP] done 00:00:08.143 [Pipeline] setCustomBuildProperty 00:00:08.155 [Pipeline] sh 00:00:08.437 + sudo git config --global --replace-all safe.directory '*' 00:00:08.531 [Pipeline] httpRequest 00:00:08.568 [Pipeline] echo 00:00:08.569 Sorcerer 10.211.164.101 is alive 00:00:08.576 [Pipeline] httpRequest 00:00:08.580 HttpMethod: GET 00:00:08.581 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.581 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.596 Response Code: HTTP/1.1 200 OK 00:00:08.596 Success: Status code 200 is in the accepted range: 200,404 00:00:08.597 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:16.299 [Pipeline] sh 00:00:16.585 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:16.602 [Pipeline] httpRequest 00:00:16.632 [Pipeline] echo 00:00:16.634 Sorcerer 10.211.164.101 is alive 00:00:16.642 [Pipeline] httpRequest 00:00:16.647 HttpMethod: GET 00:00:16.648 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:16.649 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:16.665 Response Code: HTTP/1.1 200 OK 00:00:16.666 Success: Status code 200 is in the accepted range: 200,404 00:00:16.666 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:05.495 [Pipeline] sh 00:01:05.777 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:09.075 [Pipeline] sh 00:01:09.361 + git -C spdk log --oneline -n5 00:01:09.361 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:09.361 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:09.361 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:09.361 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:09.361 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:09.382 [Pipeline] withCredentials 00:01:09.394 > git --version # timeout=10 00:01:09.406 > git --version # 'git version 2.39.2' 00:01:09.424 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:09.427 [Pipeline] { 00:01:09.436 [Pipeline] retry 00:01:09.439 [Pipeline] { 00:01:09.457 [Pipeline] sh 00:01:09.741 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:12.305 [Pipeline] } 00:01:12.327 [Pipeline] // retry 00:01:12.333 [Pipeline] } 00:01:12.354 [Pipeline] // withCredentials 00:01:12.364 [Pipeline] httpRequest 00:01:12.385 [Pipeline] echo 00:01:12.386 Sorcerer 10.211.164.101 is alive 00:01:12.398 [Pipeline] httpRequest 00:01:12.403 HttpMethod: GET 00:01:12.403 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:12.404 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:12.407 Response Code: HTTP/1.1 200 OK 00:01:12.407 Success: Status code 200 is in the accepted range: 200,404 00:01:12.408 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:18.354 [Pipeline] sh 00:01:18.684 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:20.601 [Pipeline] sh 00:01:20.883 + git -C dpdk log --oneline -n5 00:01:20.883 eeb0605f11 version: 23.11.0 00:01:20.883 238778122a doc: update release notes for 
23.11 00:01:20.883 46aa6b3cfc doc: fix description of RSS features 00:01:20.883 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:20.883 7e421ae345 devtools: support skipping forbid rule check 00:01:20.893 [Pipeline] } 00:01:20.912 [Pipeline] // stage 00:01:20.921 [Pipeline] stage 00:01:20.924 [Pipeline] { (Prepare) 00:01:20.943 [Pipeline] writeFile 00:01:20.960 [Pipeline] sh 00:01:21.243 + logger -p user.info -t JENKINS-CI 00:01:21.256 [Pipeline] sh 00:01:21.540 + logger -p user.info -t JENKINS-CI 00:01:21.551 [Pipeline] sh 00:01:21.834 + cat autorun-spdk.conf 00:01:21.834 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.834 SPDK_TEST_NVMF=1 00:01:21.834 SPDK_TEST_NVME_CLI=1 00:01:21.834 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.834 SPDK_TEST_NVMF_NICS=e810 00:01:21.834 SPDK_TEST_VFIOUSER=1 00:01:21.834 SPDK_RUN_UBSAN=1 00:01:21.834 NET_TYPE=phy 00:01:21.834 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:21.834 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.842 RUN_NIGHTLY=1 00:01:21.846 [Pipeline] readFile 00:01:21.871 [Pipeline] withEnv 00:01:21.873 [Pipeline] { 00:01:21.887 [Pipeline] sh 00:01:22.172 + set -ex 00:01:22.172 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:22.172 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.172 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.172 ++ SPDK_TEST_NVMF=1 00:01:22.172 ++ SPDK_TEST_NVME_CLI=1 00:01:22.172 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.172 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.172 ++ SPDK_TEST_VFIOUSER=1 00:01:22.172 ++ SPDK_RUN_UBSAN=1 00:01:22.172 ++ NET_TYPE=phy 00:01:22.172 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:22.172 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.172 ++ RUN_NIGHTLY=1 00:01:22.172 + case $SPDK_TEST_NVMF_NICS in 00:01:22.172 + DRIVERS=ice 00:01:22.172 + [[ tcp == \r\d\m\a ]] 00:01:22.172 + [[ -n ice ]] 00:01:22.172 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:22.172 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:22.172 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:22.172 rmmod: ERROR: Module irdma is not currently loaded 00:01:22.172 rmmod: ERROR: Module i40iw is not currently loaded 00:01:22.172 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:22.172 + true 00:01:22.172 + for D in $DRIVERS 00:01:22.172 + sudo modprobe ice 00:01:22.172 + exit 0 00:01:22.181 [Pipeline] } 00:01:22.198 [Pipeline] // withEnv 00:01:22.204 [Pipeline] } 00:01:22.220 [Pipeline] // stage 00:01:22.228 [Pipeline] catchError 00:01:22.230 [Pipeline] { 00:01:22.245 [Pipeline] timeout 00:01:22.246 Timeout set to expire in 50 min 00:01:22.247 [Pipeline] { 00:01:22.263 [Pipeline] stage 00:01:22.265 [Pipeline] { (Tests) 00:01:22.279 [Pipeline] sh 00:01:22.562 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.562 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.562 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.562 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:22.562 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.562 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.562 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:22.562 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:22.562 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.562 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:22.562 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:22.562 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.562 + source /etc/os-release 00:01:22.562 ++ NAME='Fedora Linux' 00:01:22.562 ++ VERSION='38 (Cloud Edition)' 00:01:22.562 ++ ID=fedora 00:01:22.562 ++ VERSION_ID=38 00:01:22.562 ++ VERSION_CODENAME= 00:01:22.562 ++ PLATFORM_ID=platform:f38 00:01:22.562 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:22.562 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:22.562 ++ LOGO=fedora-logo-icon 00:01:22.562 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:22.562 ++ HOME_URL=https://fedoraproject.org/ 00:01:22.562 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:22.562 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:22.562 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:22.562 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:22.562 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:22.562 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:22.562 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:22.562 ++ SUPPORT_END=2024-05-14 00:01:22.562 ++ VARIANT='Cloud Edition' 00:01:22.562 ++ VARIANT_ID=cloud 00:01:22.562 + uname -a 00:01:22.562 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:22.562 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.497 Hugepages 00:01:23.497 node hugesize free / total 00:01:23.497 node0 1048576kB 0 / 0 00:01:23.497 node0 2048kB 0 / 0 00:01:23.497 node1 1048576kB 0 / 0 00:01:23.497 node1 2048kB 0 / 0 00:01:23.497 00:01:23.497 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.497 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:23.756 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:23.756 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:23.756 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:23.756 + rm -f /tmp/spdk-ld-path 00:01:23.756 + source autorun-spdk.conf 00:01:23.756 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.756 ++ SPDK_TEST_NVMF=1 00:01:23.756 ++ SPDK_TEST_NVME_CLI=1 00:01:23.756 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.756 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.756 ++ SPDK_TEST_VFIOUSER=1 00:01:23.756 ++ SPDK_RUN_UBSAN=1 00:01:23.756 ++ NET_TYPE=phy 00:01:23.756 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:23.756 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.756 ++ RUN_NIGHTLY=1 00:01:23.756 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.756 + [[ -n '' ]] 00:01:23.756 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.756 + for M in /var/spdk/build-*-manifest.txt 00:01:23.756 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.756 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.756 + for M in /var/spdk/build-*-manifest.txt 00:01:23.756 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.756 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.756 ++ uname 00:01:23.756 + [[ Linux == \L\i\n\u\x ]] 00:01:23.756 + sudo dmesg -T 00:01:23.756 + sudo dmesg --clear 00:01:23.756 + dmesg_pid=3737257 00:01:23.756 + [[ Fedora Linux == FreeBSD ]] 00:01:23.756 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.756 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.756 + sudo dmesg -Tw 00:01:23.756 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.756 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.756 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.756 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.756 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.756 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:23.756 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.756 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.756 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.756 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.756 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.756 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.756 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.756 Test configuration: 00:01:23.756 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.756 SPDK_TEST_NVMF=1 00:01:23.756 SPDK_TEST_NVME_CLI=1 00:01:23.756 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.756 SPDK_TEST_NVMF_NICS=e810 00:01:23.756 SPDK_TEST_VFIOUSER=1 00:01:23.756 SPDK_RUN_UBSAN=1 00:01:23.756 NET_TYPE=phy 00:01:23.756 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:23.756 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.756 RUN_NIGHTLY=1 19:31:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.756 19:31:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.756 19:31:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.756 19:31:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.756 19:31:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.756 19:31:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.756 19:31:33 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.756 19:31:33 -- paths/export.sh@5 -- $ export PATH 00:01:23.756 19:31:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.756 19:31:33 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.756 19:31:33 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:23.756 19:31:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721928693.XXXXXX 00:01:23.756 19:31:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721928693.XGWbrc 00:01:23.756 19:31:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:23.756 19:31:33 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:01:23.756 19:31:33 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.756 19:31:33 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:23.756 19:31:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.756 19:31:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.756 19:31:33 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:23.756 19:31:33 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:23.756 19:31:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.756 19:31:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:23.756 19:31:33 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:01:23.757 19:31:33 -- pm/common@17 -- $ local monitor 00:01:23.757 19:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.757 19:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.757 19:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.757 19:31:33 -- pm/common@21 -- $ date +%s 00:01:23.757 19:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.757 19:31:33 -- pm/common@21 -- $ date +%s 00:01:23.757 19:31:33 -- pm/common@25 -- $ sleep 1 00:01:23.757 19:31:33 -- pm/common@21 -- $ date +%s 00:01:23.757 19:31:33 -- pm/common@21 -- $ date +%s 00:01:23.757 19:31:33 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721928693 00:01:23.757 19:31:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721928693 00:01:23.757 19:31:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721928693 00:01:23.757 19:31:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721928693 00:01:24.016 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721928693_collect-vmstat.pm.log 00:01:24.016 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721928693_collect-cpu-load.pm.log 00:01:24.016 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721928693_collect-cpu-temp.pm.log 00:01:24.016 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721928693_collect-bmc-pm.bmc.pm.log 00:01:24.960 19:31:34 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:01:24.960 19:31:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.960 19:31:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.960 19:31:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.960 19:31:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.960 Thu Jul 25 05:31:34 PM UTC 2024 00:01:24.960 19:31:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.960 v24.05-15-g241d0f3c9 00:01:24.960 19:31:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.960 19:31:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.960 19:31:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.960 19:31:34 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:24.960 19:31:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:24.960 19:31:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.960 ************************************ 00:01:24.960 START TEST ubsan 00:01:24.960 ************************************ 00:01:24.960 19:31:34 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:24.960 using ubsan 00:01:24.960 00:01:24.960 real 0m0.000s 00:01:24.960 user 0m0.000s 00:01:24.960 sys 0m0.000s 00:01:24.961 19:31:34 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:24.961 19:31:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.961 ************************************ 00:01:24.961 END TEST ubsan 00:01:24.961 ************************************ 00:01:24.961 19:31:34 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:24.961 19:31:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:24.961 19:31:34 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:24.961 19:31:34 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:24.961 19:31:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:24.961 19:31:34 -- common/autotest_common.sh@10 -- $ set +x 
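The "START TEST ubsan" / "END TEST ubsan" banners and the real/user/sys lines above come from the run_test timing wrapper in SPDK's common test helpers. A rough sketch of that pattern, not the actual autotest_common.sh implementation, is shown here; run_test_sketch is a hypothetical name used only for illustration.

    # Illustrative wrapper: print banners around a named test, time it, and
    # propagate its exit status, mirroring the output format seen in the log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # The call traced above amounts to:
    run_test_sketch ubsan echo 'using ubsan'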
00:01:24.961 ************************************ 00:01:24.961 START TEST build_native_dpdk 00:01:24.961 ************************************ 00:01:24.961 19:31:34 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:24.961 eeb0605f11 version: 23.11.0 00:01:24.961 238778122a doc: update release notes for 23.11 00:01:24.961 46aa6b3cfc doc: fix description of RSS features 00:01:24.961 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:24.961 7e421ae345 devtools: support skipping forbid rule check 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.961 19:31:34 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:24.961 patching file config/rte_config.h 00:01:24.961 Hunk #1 succeeded at 60 (offset 1 line). 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:24.961 19:31:34 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:24.961 patching file lib/pcapng/rte_pcapng.c 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:24.961 19:31:34 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.150 The Meson build system 00:01:29.150 Version: 1.3.1 00:01:29.150 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.150 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:29.150 Build type: native build 00:01:29.150 Program cat found: YES (/usr/bin/cat) 00:01:29.150 Project name: DPDK 00:01:29.150 Project version: 23.11.0 00:01:29.150 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:29.150 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:29.150 Host machine cpu family: x86_64 00:01:29.150 Host machine cpu: x86_64 00:01:29.150 Message: ## Building in Developer Mode ## 00:01:29.150 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:29.150 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:29.150 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:29.150 Program python3 found: YES (/usr/bin/python3) 00:01:29.150 Program cat found: YES (/usr/bin/cat) 00:01:29.150 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
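The cmp_versions trace a few lines above is how autobuild decides which DPDK patches to apply: each version string is split on ".", "-" and ":" and the numeric fields are compared left to right, so 23.11.0 tests as not older than 21.11.0 but older than 24.07.0, and the rte_config.h and rte_pcapng.c patches follow from those answers. A simplified sketch of that comparison, assuming only plain numeric fields (the real scripts/common.sh helper handles more operators and formats), is:

    # version_lt A B: succeed (return 0) when version A sorts before version B.
    version_lt() {
        local IFS=.-:                       # split fields on '.', '-' and ':'
        local -a v1=($1) v2=($2)
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0}; b=${v2[i]:-0}    # missing fields count as 0
            (( a < b )) && return 0         # first differing field decides
            (( a > b )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }

    version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"
    version_lt 23.11.0 24.07.0 && echo "23.11.0 is older than 24.07.0"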
00:01:29.150 Compiler for C supports arguments -march=native: YES 00:01:29.150 Checking for size of "void *" : 8 00:01:29.150 Checking for size of "void *" : 8 (cached) 00:01:29.150 Library m found: YES 00:01:29.150 Library numa found: YES 00:01:29.150 Has header "numaif.h" : YES 00:01:29.150 Library fdt found: NO 00:01:29.150 Library execinfo found: NO 00:01:29.150 Has header "execinfo.h" : YES 00:01:29.150 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:29.150 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:29.150 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:29.150 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:29.150 Run-time dependency openssl found: YES 3.0.9 00:01:29.150 Run-time dependency libpcap found: YES 1.10.4 00:01:29.150 Has header "pcap.h" with dependency libpcap: YES 00:01:29.150 Compiler for C supports arguments -Wcast-qual: YES 00:01:29.150 Compiler for C supports arguments -Wdeprecated: YES 00:01:29.150 Compiler for C supports arguments -Wformat: YES 00:01:29.150 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:29.150 Compiler for C supports arguments -Wformat-security: NO 00:01:29.150 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.150 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:29.150 Compiler for C supports arguments -Wnested-externs: YES 00:01:29.150 Compiler for C supports arguments -Wold-style-definition: YES 00:01:29.151 Compiler for C supports arguments -Wpointer-arith: YES 00:01:29.151 Compiler for C supports arguments -Wsign-compare: YES 00:01:29.151 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:29.151 Compiler for C supports arguments -Wundef: YES 00:01:29.151 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.151 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:29.151 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:29.151 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.151 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:29.151 Program objdump found: YES (/usr/bin/objdump) 00:01:29.151 Compiler for C supports arguments -mavx512f: YES 00:01:29.151 Checking if "AVX512 checking" compiles: YES 00:01:29.151 Fetching value of define "__SSE4_2__" : 1 00:01:29.151 Fetching value of define "__AES__" : 1 00:01:29.151 Fetching value of define "__AVX__" : 1 00:01:29.151 Fetching value of define "__AVX2__" : (undefined) 00:01:29.151 Fetching value of define "__AVX512BW__" : (undefined) 00:01:29.151 Fetching value of define "__AVX512CD__" : (undefined) 00:01:29.151 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:29.151 Fetching value of define "__AVX512F__" : (undefined) 00:01:29.151 Fetching value of define "__AVX512VL__" : (undefined) 00:01:29.151 Fetching value of define "__PCLMUL__" : 1 00:01:29.151 Fetching value of define "__RDRND__" : 1 00:01:29.151 Fetching value of define "__RDSEED__" : (undefined) 00:01:29.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:29.151 Fetching value of define "__znver1__" : (undefined) 00:01:29.151 Fetching value of define "__znver2__" : (undefined) 00:01:29.151 Fetching value of define "__znver3__" : (undefined) 00:01:29.151 Fetching value of define "__znver4__" : (undefined) 00:01:29.151 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:29.151 Message: lib/log: Defining dependency "log" 00:01:29.151 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:29.151 Message: lib/telemetry: Defining dependency "telemetry" 00:01:29.151 Checking for function "getentropy" : NO 00:01:29.151 Message: lib/eal: Defining dependency "eal" 00:01:29.151 Message: lib/ring: Defining dependency "ring" 00:01:29.151 Message: lib/rcu: Defining dependency "rcu" 00:01:29.151 Message: lib/mempool: Defining dependency "mempool" 00:01:29.151 Message: lib/mbuf: Defining dependency "mbuf" 00:01:29.151 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:29.151 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.151 Compiler for C supports arguments -mpclmul: YES 00:01:29.151 Compiler for C supports arguments -maes: YES 00:01:29.151 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.151 Compiler for C supports arguments -mavx512bw: YES 00:01:29.151 Compiler for C supports arguments -mavx512dq: YES 00:01:29.151 Compiler for C supports arguments -mavx512vl: YES 00:01:29.151 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:29.151 Compiler for C supports arguments -mavx2: YES 00:01:29.151 Compiler for C supports arguments -mavx: YES 00:01:29.151 Message: lib/net: Defining dependency "net" 00:01:29.151 Message: lib/meter: Defining dependency "meter" 00:01:29.151 Message: lib/ethdev: Defining dependency "ethdev" 00:01:29.151 Message: lib/pci: Defining dependency "pci" 00:01:29.151 Message: lib/cmdline: Defining dependency "cmdline" 00:01:29.151 Message: lib/metrics: Defining dependency "metrics" 00:01:29.151 Message: lib/hash: Defining dependency "hash" 00:01:29.151 Message: lib/timer: Defining dependency "timer" 00:01:29.151 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.151 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:29.151 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:29.151 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:29.151 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:29.151 Message: lib/acl: Defining dependency "acl" 00:01:29.151 Message: lib/bbdev: Defining dependency "bbdev" 00:01:29.151 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:29.151 Run-time dependency libelf found: YES 0.190 00:01:29.151 Message: lib/bpf: Defining dependency "bpf" 00:01:29.151 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:29.151 Message: lib/compressdev: Defining dependency "compressdev" 00:01:29.151 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:29.151 Message: lib/distributor: Defining dependency "distributor" 00:01:29.151 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.151 Message: lib/efd: Defining dependency "efd" 00:01:29.151 Message: lib/eventdev: Defining dependency "eventdev" 00:01:29.151 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:29.151 Message: lib/gpudev: Defining dependency "gpudev" 00:01:29.151 Message: lib/gro: Defining dependency "gro" 00:01:29.151 Message: lib/gso: Defining dependency "gso" 00:01:29.151 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:29.151 Message: lib/jobstats: Defining dependency "jobstats" 00:01:29.151 Message: lib/latencystats: Defining dependency "latencystats" 00:01:29.151 Message: lib/lpm: Defining dependency "lpm" 00:01:29.151 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.151 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.151 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:29.151 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:29.151 Message: lib/member: Defining dependency "member" 00:01:29.151 Message: lib/pcapng: Defining dependency "pcapng" 00:01:29.151 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.151 Message: lib/power: Defining dependency "power" 00:01:29.151 Message: lib/rawdev: Defining dependency "rawdev" 00:01:29.151 Message: lib/regexdev: Defining dependency "regexdev" 00:01:29.151 Message: lib/mldev: Defining dependency "mldev" 00:01:29.151 Message: lib/rib: Defining dependency "rib" 00:01:29.151 Message: lib/reorder: Defining dependency "reorder" 00:01:29.151 Message: lib/sched: Defining dependency "sched" 00:01:29.151 Message: lib/security: Defining dependency "security" 00:01:29.151 Message: lib/stack: Defining dependency "stack" 00:01:29.151 Has header "linux/userfaultfd.h" : YES 00:01:29.151 Has header "linux/vduse.h" : YES 00:01:29.151 Message: lib/vhost: Defining dependency "vhost" 00:01:29.151 Message: lib/ipsec: Defining dependency "ipsec" 00:01:29.151 Message: lib/pdcp: Defining dependency "pdcp" 00:01:29.151 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.151 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.151 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:29.151 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.151 Message: lib/fib: Defining dependency "fib" 00:01:29.151 Message: lib/port: Defining dependency "port" 00:01:29.151 Message: lib/pdump: Defining dependency "pdump" 00:01:29.151 Message: lib/table: Defining dependency "table" 00:01:29.151 Message: lib/pipeline: Defining dependency "pipeline" 00:01:29.151 Message: lib/graph: Defining dependency "graph" 00:01:29.151 Message: lib/node: Defining dependency "node" 00:01:31.060 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:31.060 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:31.060 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:31.060 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:31.060 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:31.060 Compiler for C supports arguments -Wno-unused-value: YES 00:01:31.060 Compiler for C supports arguments -Wno-format: YES 00:01:31.060 Compiler for C supports arguments -Wno-format-security: YES 00:01:31.060 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:31.060 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:31.060 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:31.060 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:31.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.060 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:31.060 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:31.060 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:31.060 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:31.060 Has header "sys/epoll.h" : YES 00:01:31.060 Program doxygen found: YES (/usr/bin/doxygen) 00:01:31.060 Configuring doxy-api-html.conf using configuration 00:01:31.060 Configuring doxy-api-man.conf using configuration 00:01:31.060 Program mandb found: YES (/usr/bin/mandb) 00:01:31.060 Program sphinx-build found: NO 00:01:31.060 Configuring rte_build_config.h using configuration 00:01:31.060 Message: 00:01:31.060 ================= 00:01:31.060 Applications Enabled 00:01:31.060 
================= 00:01:31.060 00:01:31.060 apps: 00:01:31.060 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:31.060 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:31.060 test-pmd, test-regex, test-sad, test-security-perf, 00:01:31.060 00:01:31.060 Message: 00:01:31.060 ================= 00:01:31.060 Libraries Enabled 00:01:31.060 ================= 00:01:31.060 00:01:31.060 libs: 00:01:31.060 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:31.060 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:31.060 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:31.060 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:31.060 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:31.060 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:31.060 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:31.060 00:01:31.060 00:01:31.060 Message: 00:01:31.060 =============== 00:01:31.060 Drivers Enabled 00:01:31.060 =============== 00:01:31.060 00:01:31.060 common: 00:01:31.060 00:01:31.060 bus: 00:01:31.060 pci, vdev, 00:01:31.060 mempool: 00:01:31.060 ring, 00:01:31.060 dma: 00:01:31.060 00:01:31.060 net: 00:01:31.060 i40e, 00:01:31.060 raw: 00:01:31.060 00:01:31.060 crypto: 00:01:31.060 00:01:31.060 compress: 00:01:31.060 00:01:31.060 regex: 00:01:31.060 00:01:31.060 ml: 00:01:31.060 00:01:31.060 vdpa: 00:01:31.060 00:01:31.060 event: 00:01:31.060 00:01:31.060 baseband: 00:01:31.060 00:01:31.060 gpu: 00:01:31.060 00:01:31.060 00:01:31.060 Message: 00:01:31.060 ================= 00:01:31.060 Content Skipped 00:01:31.060 ================= 00:01:31.060 00:01:31.060 apps: 00:01:31.060 00:01:31.060 libs: 00:01:31.060 00:01:31.060 drivers: 00:01:31.060 common/cpt: not in enabled drivers build config 00:01:31.060 common/dpaax: not in enabled drivers build config 00:01:31.060 common/iavf: not in enabled drivers build config 00:01:31.060 common/idpf: not in enabled drivers build config 00:01:31.060 common/mvep: not in enabled drivers build config 00:01:31.060 common/octeontx: not in enabled drivers build config 00:01:31.060 bus/auxiliary: not in enabled drivers build config 00:01:31.060 bus/cdx: not in enabled drivers build config 00:01:31.060 bus/dpaa: not in enabled drivers build config 00:01:31.060 bus/fslmc: not in enabled drivers build config 00:01:31.060 bus/ifpga: not in enabled drivers build config 00:01:31.060 bus/platform: not in enabled drivers build config 00:01:31.060 bus/vmbus: not in enabled drivers build config 00:01:31.060 common/cnxk: not in enabled drivers build config 00:01:31.060 common/mlx5: not in enabled drivers build config 00:01:31.060 common/nfp: not in enabled drivers build config 00:01:31.060 common/qat: not in enabled drivers build config 00:01:31.060 common/sfc_efx: not in enabled drivers build config 00:01:31.060 mempool/bucket: not in enabled drivers build config 00:01:31.060 mempool/cnxk: not in enabled drivers build config 00:01:31.060 mempool/dpaa: not in enabled drivers build config 00:01:31.060 mempool/dpaa2: not in enabled drivers build config 00:01:31.060 mempool/octeontx: not in enabled drivers build config 00:01:31.060 mempool/stack: not in enabled drivers build config 00:01:31.060 dma/cnxk: not in enabled drivers build config 00:01:31.060 dma/dpaa: not in enabled drivers build config 00:01:31.060 dma/dpaa2: not in enabled drivers build 
config 00:01:31.060 dma/hisilicon: not in enabled drivers build config 00:01:31.060 dma/idxd: not in enabled drivers build config 00:01:31.060 dma/ioat: not in enabled drivers build config 00:01:31.060 dma/skeleton: not in enabled drivers build config 00:01:31.060 net/af_packet: not in enabled drivers build config 00:01:31.060 net/af_xdp: not in enabled drivers build config 00:01:31.060 net/ark: not in enabled drivers build config 00:01:31.060 net/atlantic: not in enabled drivers build config 00:01:31.060 net/avp: not in enabled drivers build config 00:01:31.060 net/axgbe: not in enabled drivers build config 00:01:31.060 net/bnx2x: not in enabled drivers build config 00:01:31.060 net/bnxt: not in enabled drivers build config 00:01:31.060 net/bonding: not in enabled drivers build config 00:01:31.060 net/cnxk: not in enabled drivers build config 00:01:31.060 net/cpfl: not in enabled drivers build config 00:01:31.060 net/cxgbe: not in enabled drivers build config 00:01:31.060 net/dpaa: not in enabled drivers build config 00:01:31.060 net/dpaa2: not in enabled drivers build config 00:01:31.060 net/e1000: not in enabled drivers build config 00:01:31.060 net/ena: not in enabled drivers build config 00:01:31.060 net/enetc: not in enabled drivers build config 00:01:31.060 net/enetfec: not in enabled drivers build config 00:01:31.060 net/enic: not in enabled drivers build config 00:01:31.060 net/failsafe: not in enabled drivers build config 00:01:31.060 net/fm10k: not in enabled drivers build config 00:01:31.060 net/gve: not in enabled drivers build config 00:01:31.060 net/hinic: not in enabled drivers build config 00:01:31.060 net/hns3: not in enabled drivers build config 00:01:31.060 net/iavf: not in enabled drivers build config 00:01:31.060 net/ice: not in enabled drivers build config 00:01:31.060 net/idpf: not in enabled drivers build config 00:01:31.060 net/igc: not in enabled drivers build config 00:01:31.060 net/ionic: not in enabled drivers build config 00:01:31.060 net/ipn3ke: not in enabled drivers build config 00:01:31.060 net/ixgbe: not in enabled drivers build config 00:01:31.060 net/mana: not in enabled drivers build config 00:01:31.060 net/memif: not in enabled drivers build config 00:01:31.060 net/mlx4: not in enabled drivers build config 00:01:31.060 net/mlx5: not in enabled drivers build config 00:01:31.060 net/mvneta: not in enabled drivers build config 00:01:31.060 net/mvpp2: not in enabled drivers build config 00:01:31.060 net/netvsc: not in enabled drivers build config 00:01:31.060 net/nfb: not in enabled drivers build config 00:01:31.060 net/nfp: not in enabled drivers build config 00:01:31.060 net/ngbe: not in enabled drivers build config 00:01:31.060 net/null: not in enabled drivers build config 00:01:31.060 net/octeontx: not in enabled drivers build config 00:01:31.060 net/octeon_ep: not in enabled drivers build config 00:01:31.060 net/pcap: not in enabled drivers build config 00:01:31.060 net/pfe: not in enabled drivers build config 00:01:31.060 net/qede: not in enabled drivers build config 00:01:31.061 net/ring: not in enabled drivers build config 00:01:31.061 net/sfc: not in enabled drivers build config 00:01:31.061 net/softnic: not in enabled drivers build config 00:01:31.061 net/tap: not in enabled drivers build config 00:01:31.061 net/thunderx: not in enabled drivers build config 00:01:31.061 net/txgbe: not in enabled drivers build config 00:01:31.061 net/vdev_netvsc: not in enabled drivers build config 00:01:31.061 net/vhost: not in enabled drivers build config 
00:01:31.061 net/virtio: not in enabled drivers build config 00:01:31.061 net/vmxnet3: not in enabled drivers build config 00:01:31.061 raw/cnxk_bphy: not in enabled drivers build config 00:01:31.061 raw/cnxk_gpio: not in enabled drivers build config 00:01:31.061 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:31.061 raw/ifpga: not in enabled drivers build config 00:01:31.061 raw/ntb: not in enabled drivers build config 00:01:31.061 raw/skeleton: not in enabled drivers build config 00:01:31.061 crypto/armv8: not in enabled drivers build config 00:01:31.061 crypto/bcmfs: not in enabled drivers build config 00:01:31.061 crypto/caam_jr: not in enabled drivers build config 00:01:31.061 crypto/ccp: not in enabled drivers build config 00:01:31.061 crypto/cnxk: not in enabled drivers build config 00:01:31.061 crypto/dpaa_sec: not in enabled drivers build config 00:01:31.061 crypto/dpaa2_sec: not in enabled drivers build config 00:01:31.061 crypto/ipsec_mb: not in enabled drivers build config 00:01:31.061 crypto/mlx5: not in enabled drivers build config 00:01:31.061 crypto/mvsam: not in enabled drivers build config 00:01:31.061 crypto/nitrox: not in enabled drivers build config 00:01:31.061 crypto/null: not in enabled drivers build config 00:01:31.061 crypto/octeontx: not in enabled drivers build config 00:01:31.061 crypto/openssl: not in enabled drivers build config 00:01:31.061 crypto/scheduler: not in enabled drivers build config 00:01:31.061 crypto/uadk: not in enabled drivers build config 00:01:31.061 crypto/virtio: not in enabled drivers build config 00:01:31.061 compress/isal: not in enabled drivers build config 00:01:31.061 compress/mlx5: not in enabled drivers build config 00:01:31.061 compress/octeontx: not in enabled drivers build config 00:01:31.061 compress/zlib: not in enabled drivers build config 00:01:31.061 regex/mlx5: not in enabled drivers build config 00:01:31.061 regex/cn9k: not in enabled drivers build config 00:01:31.061 ml/cnxk: not in enabled drivers build config 00:01:31.061 vdpa/ifc: not in enabled drivers build config 00:01:31.061 vdpa/mlx5: not in enabled drivers build config 00:01:31.061 vdpa/nfp: not in enabled drivers build config 00:01:31.061 vdpa/sfc: not in enabled drivers build config 00:01:31.061 event/cnxk: not in enabled drivers build config 00:01:31.061 event/dlb2: not in enabled drivers build config 00:01:31.061 event/dpaa: not in enabled drivers build config 00:01:31.061 event/dpaa2: not in enabled drivers build config 00:01:31.061 event/dsw: not in enabled drivers build config 00:01:31.061 event/opdl: not in enabled drivers build config 00:01:31.061 event/skeleton: not in enabled drivers build config 00:01:31.061 event/sw: not in enabled drivers build config 00:01:31.061 event/octeontx: not in enabled drivers build config 00:01:31.061 baseband/acc: not in enabled drivers build config 00:01:31.061 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:31.061 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:31.061 baseband/la12xx: not in enabled drivers build config 00:01:31.061 baseband/null: not in enabled drivers build config 00:01:31.061 baseband/turbo_sw: not in enabled drivers build config 00:01:31.061 gpu/cuda: not in enabled drivers build config 00:01:31.061 00:01:31.061 00:01:31.061 Build targets in project: 220 00:01:31.061 00:01:31.061 DPDK 23.11.0 00:01:31.061 00:01:31.061 User defined options 00:01:31.061 libdir : lib 00:01:31.061 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.061 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:31.061 c_link_args : 00:01:31.061 enable_docs : false 00:01:31.061 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:31.061 enable_kmods : false 00:01:31.061 machine : native 00:01:31.061 tests : false 00:01:31.061 00:01:31.061 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.061 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:31.061 19:31:40 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:31.061 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:31.061 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.061 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.061 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:31.061 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.061 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.061 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.061 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.061 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.061 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.061 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.061 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:31.061 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:31.061 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.061 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:31.061 [15/710] Linking static target lib/librte_kvargs.a 00:01:31.061 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.061 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.324 [18/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:31.324 [19/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.324 [20/710] Linking static target lib/librte_log.a 00:01:31.324 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.583 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.844 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.844 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.844 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.844 [26/710] Linking target lib/librte_log.so.24.0 00:01:32.109 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.109 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.109 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.109 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.109 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.109 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.109 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.109 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.109 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.109 [36/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.109 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:32.109 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:32.109 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.109 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.109 [41/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:32.109 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.109 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:32.109 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.109 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:32.109 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:32.109 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:32.109 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:32.109 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:32.109 [50/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:32.109 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:32.109 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:32.109 [53/710] Linking target lib/librte_kvargs.so.24.0 00:01:32.109 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:32.368 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:32.368 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:32.368 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:32.368 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:32.368 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:32.368 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:32.368 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:32.368 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:32.368 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.368 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:32.368 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:32.632 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.632 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.632 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.632 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.632 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.891 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:32.891 
[72/710] Linking static target lib/librte_pci.a 00:01:32.891 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:32.891 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:32.891 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:33.156 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:33.156 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:33.156 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:33.156 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:33.156 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:33.156 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:33.156 [82/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.156 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:33.156 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:33.156 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:33.156 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:33.156 [87/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:33.156 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:33.156 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:33.156 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:33.156 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:33.156 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:33.414 [93/710] Linking static target lib/librte_ring.a 00:01:33.414 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:33.414 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:33.414 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:33.414 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:33.414 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:33.414 [99/710] Linking static target lib/librte_meter.a 00:01:33.414 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:33.415 [101/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:33.415 [102/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:33.415 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:33.415 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:33.415 [105/710] Linking static target lib/librte_telemetry.a 00:01:33.677 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:33.677 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:33.677 [108/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:33.677 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:33.677 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:33.677 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:33.677 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:33.677 [113/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:33.677 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.677 [115/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.939 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:33.939 [117/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:33.939 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:33.939 [119/710] Linking static target lib/librte_eal.a 00:01:33.939 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:33.939 [121/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:33.939 [122/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:33.939 [123/710] Linking static target lib/librte_net.a 00:01:33.939 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:33.939 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:34.200 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:34.200 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:34.200 [128/710] Linking static target lib/librte_mempool.a 00:01:34.200 [129/710] Linking static target lib/librte_cmdline.a 00:01:34.200 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.200 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:34.200 [132/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:34.200 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:34.464 [134/710] Linking static target lib/librte_cfgfile.a 00:01:34.464 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.464 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:34.464 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:34.464 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:34.465 [139/710] Linking static target lib/librte_metrics.a 00:01:34.465 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:34.465 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:34.465 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:34.465 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:34.728 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:34.728 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:34.728 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:34.728 [147/710] Linking static target lib/librte_bitratestats.a 00:01:34.728 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:34.728 [149/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:34.728 [150/710] Linking static target lib/librte_rcu.a 00:01:34.728 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:34.728 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:34.997 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.997 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
00:01:34.997 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:34.997 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.997 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:34.997 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:34.997 [159/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.997 [160/710] Linking static target lib/librte_timer.a 00:01:34.997 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:34.997 [162/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:34.997 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.256 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.256 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.256 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:35.256 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:35.256 [168/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.256 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:35.256 [170/710] Linking static target lib/librte_bbdev.a 00:01:35.519 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.519 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.519 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.519 [174/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:35.519 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.519 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.780 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.780 [178/710] Linking static target lib/librte_compressdev.a 00:01:35.780 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:35.780 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:35.780 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:36.045 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:36.045 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:36.045 [184/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.045 [185/710] Linking static target lib/librte_distributor.a 00:01:36.045 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:36.303 [187/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.303 [188/710] Linking static target lib/librte_dmadev.a 00:01:36.304 [189/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.304 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:36.304 [191/710] Linking static target lib/librte_bpf.a 00:01:36.304 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:36.566 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:36.566 
[194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.566 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:36.566 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:36.566 [197/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.566 [198/710] Linking static target lib/librte_dispatcher.a 00:01:36.566 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:36.566 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:36.566 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:36.566 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:36.827 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:36.827 [204/710] Linking static target lib/librte_gpudev.a 00:01:36.827 [205/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:36.827 [206/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:36.827 [207/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:36.827 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:36.827 [209/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:36.827 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:36.827 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.827 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:36.827 [213/710] Linking static target lib/librte_gro.a 00:01:36.827 [214/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:36.827 [215/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.827 [216/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.090 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:37.090 [218/710] Linking static target lib/librte_jobstats.a 00:01:37.090 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:37.090 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:37.354 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:37.354 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.354 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.354 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:37.354 [225/710] Linking static target lib/librte_latencystats.a 00:01:37.354 [226/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.622 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:37.622 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:37.622 [229/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:37.622 [230/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:37.622 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:37.622 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:37.622 [233/710] Compiling C object 
lib/librte_member.a.p/member_rte_member.c.o 00:01:37.622 [234/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.622 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:37.884 [236/710] Linking static target lib/librte_ip_frag.a 00:01:37.884 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:37.884 [238/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:37.884 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:37.884 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:37.884 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:38.145 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.145 [243/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.145 [244/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:38.145 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:38.145 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.409 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:38.409 [248/710] Linking static target lib/librte_gso.a 00:01:38.409 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:38.410 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:38.410 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.410 [252/710] Linking static target lib/librte_regexdev.a 00:01:38.410 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:38.410 [254/710] Linking static target lib/librte_rawdev.a 00:01:38.410 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:38.410 [256/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:38.410 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:38.676 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:38.676 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.676 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:38.676 [261/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:38.676 [262/710] Linking static target lib/librte_mldev.a 00:01:38.676 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:38.676 [264/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:38.676 [265/710] Linking static target lib/librte_pcapng.a 00:01:38.959 [266/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:38.959 [267/710] Linking static target lib/librte_efd.a 00:01:38.959 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:38.959 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:38.959 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:38.959 [271/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:38.959 [272/710] Linking static target lib/librte_stack.a 00:01:38.959 [273/710] Linking static target lib/acl/libavx2_tmp.a 00:01:38.959 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:38.959 
[275/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:38.959 [276/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.959 [277/710] Linking static target lib/librte_lpm.a 00:01:38.959 [278/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.223 [279/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.223 [280/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.223 [281/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:39.223 [282/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.223 [283/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.223 [284/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.223 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.223 [286/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.223 [287/710] Linking static target lib/librte_hash.a 00:01:39.487 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.487 [289/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:39.487 [290/710] Linking static target lib/acl/libavx512_tmp.a 00:01:39.487 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.487 [292/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.487 [293/710] Linking static target lib/librte_acl.a 00:01:39.487 [294/710] Linking static target lib/librte_reorder.a 00:01:39.487 [295/710] Linking static target lib/librte_power.a 00:01:39.487 [296/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.487 [297/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.487 [298/710] Linking static target lib/librte_security.a 00:01:39.487 [299/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.756 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.756 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:39.756 [302/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.756 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.756 [304/710] Linking static target lib/librte_mbuf.a 00:01:40.023 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.023 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.023 [307/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:40.023 [308/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [309/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:40.023 [311/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:40.023 [312/710] Linking static target lib/librte_rib.a 00:01:40.023 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:40.023 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:40.023 [315/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [316/710] Compiling C 
object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:40.283 [317/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:40.283 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.283 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:40.283 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:40.283 [321/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:40.283 [322/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:40.283 [323/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:40.283 [324/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:40.548 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:40.548 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.811 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.811 [328/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.811 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.811 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:40.811 [331/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.811 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:41.077 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:41.077 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:41.077 [335/710] Linking static target lib/librte_eventdev.a 00:01:41.077 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:41.337 [337/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:41.337 [338/710] Linking static target lib/librte_member.a 00:01:41.337 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.337 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.337 [341/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:41.337 [342/710] Linking static target lib/librte_cryptodev.a 00:01:41.337 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:41.599 [344/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:41.599 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:41.599 [346/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:41.599 [347/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:41.599 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:41.599 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:41.599 [350/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:41.599 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:41.599 [352/710] Linking static target lib/librte_sched.a 00:01:41.599 [353/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:41.599 [354/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.861 [355/710] Linking static target lib/librte_ethdev.a 00:01:41.861 [356/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:41.861 
[357/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.861 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:41.861 [359/710] Linking static target lib/librte_fib.a 00:01:41.861 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:41.861 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:41.861 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:42.128 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:42.128 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:42.128 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:42.128 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:42.128 [367/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:42.393 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:42.393 [369/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.393 [370/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.393 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:42.393 [372/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.393 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:42.656 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:42.656 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:42.656 [376/710] Linking static target lib/librte_pdump.a 00:01:42.920 [377/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:42.920 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:42.920 [379/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:42.920 [380/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:42.920 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:42.920 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:42.920 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:42.920 [384/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:42.920 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:42.920 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:42.920 [387/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:42.920 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:43.187 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:43.187 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.187 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:43.187 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:43.187 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:43.187 [394/710] Linking static target lib/librte_ipsec.a 00:01:43.449 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:43.449 [396/710] Linking static target lib/librte_table.a 00:01:43.449 [397/710] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:43.449 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:43.711 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:43.711 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:43.711 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.976 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:43.976 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:43.976 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:44.240 [405/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.240 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:44.240 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:44.240 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.240 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.240 [410/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:44.240 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.514 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.514 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.514 [414/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.514 [415/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:44.514 [416/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.514 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:44.793 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.793 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.793 [420/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.793 [421/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.793 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.793 [423/710] Linking static target drivers/librte_bus_vdev.a 00:01:44.793 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:44.793 [425/710] Linking static target lib/librte_port.a 00:01:44.793 [426/710] Linking target lib/librte_eal.so.24.0 00:01:44.793 [427/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:44.793 [428/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.061 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:45.061 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.061 [431/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.061 [432/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:45.061 [433/710] Linking static target drivers/librte_bus_pci.a 00:01:45.061 [434/710] Linking target lib/librte_ring.so.24.0 00:01:45.326 [435/710] Linking target lib/librte_meter.so.24.0 00:01:45.326 [436/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:45.326 
[437/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.326 [438/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:45.326 [439/710] Linking target lib/librte_pci.so.24.0 00:01:45.326 [440/710] Linking target lib/librte_timer.so.24.0 00:01:45.326 [441/710] Linking target lib/librte_acl.so.24.0 00:01:45.326 [442/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:45.326 [443/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:45.326 [444/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:45.326 [445/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:45.595 [446/710] Linking target lib/librte_cfgfile.so.24.0 00:01:45.595 [447/710] Linking target lib/librte_dmadev.so.24.0 00:01:45.595 [448/710] Linking target lib/librte_rcu.so.24.0 00:01:45.595 [449/710] Linking target lib/librte_mempool.so.24.0 00:01:45.595 [450/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:45.595 [451/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:45.595 [452/710] Linking target lib/librte_jobstats.so.24.0 00:01:45.595 [453/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:45.595 [454/710] Linking static target lib/librte_graph.a 00:01:45.595 [455/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:45.595 [456/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:45.595 [457/710] Linking target lib/librte_rawdev.so.24.0 00:01:45.595 [458/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.595 [459/710] Linking target lib/librte_stack.so.24.0 00:01:45.595 [460/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.595 [461/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:45.595 [462/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:45.595 [463/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:45.595 [464/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.858 [465/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:45.858 [466/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:45.858 [467/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:45.858 [468/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:45.858 [469/710] Linking target lib/librte_mbuf.so.24.0 00:01:45.858 [470/710] Linking target lib/librte_rib.so.24.0 00:01:45.858 [471/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:45.858 [472/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:45.858 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.126 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.126 [475/710] Linking static target drivers/librte_mempool_ring.a 00:01:46.126 [476/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:46.127 [477/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.127 [478/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:46.127 [479/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:46.127 [480/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:46.127 [481/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:46.127 [482/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:46.127 [483/710] Linking target lib/librte_fib.so.24.0 00:01:46.127 [484/710] Linking target lib/librte_net.so.24.0 00:01:46.127 [485/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:46.127 [486/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:46.127 [487/710] Linking target lib/librte_bbdev.so.24.0 00:01:46.127 [488/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:46.127 [489/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:46.127 [490/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:46.392 [491/710] Linking target lib/librte_compressdev.so.24.0 00:01:46.392 [492/710] Linking target lib/librte_distributor.so.24.0 00:01:46.392 [493/710] Linking target lib/librte_cryptodev.so.24.0 00:01:46.392 [494/710] Linking target lib/librte_gpudev.so.24.0 00:01:46.392 [495/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:46.392 [496/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:46.392 [497/710] Linking target lib/librte_mldev.so.24.0 00:01:46.392 [498/710] Linking target lib/librte_regexdev.so.24.0 00:01:46.392 [499/710] Linking target lib/librte_reorder.so.24.0 00:01:46.392 [500/710] Linking target lib/librte_sched.so.24.0 00:01:46.392 [501/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:46.392 [502/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.392 [503/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:46.392 [504/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:46.392 [505/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:46.392 [506/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:46.656 [507/710] Linking target lib/librte_cmdline.so.24.0 00:01:46.656 [508/710] Linking target lib/librte_hash.so.24.0 00:01:46.656 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:46.656 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:46.656 [511/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.656 [512/710] Linking target lib/librte_security.so.24.0 00:01:46.656 [513/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:46.656 [514/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:46.656 [515/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:46.918 [516/710] Linking target lib/librte_efd.so.24.0 00:01:46.918 [517/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:46.918 [518/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:46.918 [519/710] Linking target lib/librte_lpm.so.24.0 00:01:46.918 [520/710] Linking target lib/librte_member.so.24.0 00:01:46.918 [521/710] Compiling C 
object app/dpdk-graph.p/graph_main.c.o 00:01:46.918 [522/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:46.918 [523/710] Linking target lib/librte_ipsec.so.24.0 00:01:47.182 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:47.182 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:47.182 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:47.182 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:47.182 [528/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:47.182 [529/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:47.448 [530/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:47.448 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:47.448 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:47.712 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:47.712 [534/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:47.712 [535/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:47.712 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:47.712 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:47.712 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:47.712 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:47.712 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:47.978 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:48.242 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:48.242 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:48.505 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:48.505 [545/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:48.505 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:48.505 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:48.505 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:48.505 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:48.505 [550/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:48.505 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:48.505 [552/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:48.765 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:48.765 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:48.765 [555/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:48.765 [556/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:48.765 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:48.765 [558/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:49.027 [559/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 
00:01:49.290 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:49.557 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:49.557 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:49.819 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:49.819 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:49.819 [565/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.819 [566/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:49.819 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:49.819 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:49.819 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:49.819 [570/710] Linking target lib/librte_ethdev.so.24.0 00:01:50.082 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:50.082 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:50.082 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:50.082 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:50.082 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:50.082 [576/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:50.344 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:50.344 [578/710] Linking target lib/librte_metrics.so.24.0 00:01:50.344 [579/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:50.344 [580/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:50.344 [581/710] Linking target lib/librte_bpf.so.24.0 00:01:50.344 [582/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:50.344 [583/710] Linking target lib/librte_gro.so.24.0 00:01:50.344 [584/710] Linking target lib/librte_eventdev.so.24.0 00:01:50.344 [585/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:50.344 [586/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:50.344 [587/710] Linking target lib/librte_gso.so.24.0 00:01:50.344 [588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:50.607 [589/710] Linking target lib/librte_ip_frag.so.24.0 00:01:50.607 [590/710] Linking static target lib/librte_pdcp.a 00:01:50.607 [591/710] Linking target lib/librte_pcapng.so.24.0 00:01:50.607 [592/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:50.607 [593/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:50.607 [594/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:50.607 [595/710] Linking target lib/librte_power.so.24.0 00:01:50.607 [596/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:50.607 [597/710] Linking target lib/librte_latencystats.so.24.0 00:01:50.607 [598/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:50.607 [599/710] Linking target lib/librte_bitratestats.so.24.0 00:01:50.607 [600/710] 
Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:50.607 [601/710] Linking target lib/librte_dispatcher.so.24.0 00:01:50.608 [602/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:50.608 [603/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:50.866 [604/710] Linking target lib/librte_pdump.so.24.0 00:01:50.866 [605/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:50.866 [606/710] Linking target lib/librte_port.so.24.0 00:01:50.866 [607/710] Linking target lib/librte_graph.so.24.0 00:01:50.866 [608/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:50.866 [609/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:50.866 [610/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:51.132 [611/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.132 [612/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:51.132 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:51.132 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:51.132 [615/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:51.132 [616/710] Linking target lib/librte_pdcp.so.24.0 00:01:51.132 [617/710] Linking target lib/librte_table.so.24.0 00:01:51.132 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:51.392 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:51.392 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:51.392 [621/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:51.392 [622/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:51.653 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:51.653 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:51.653 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:51.653 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:51.653 [627/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:51.919 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:51.919 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:51.919 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:52.178 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:52.178 [632/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:52.178 [633/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:52.178 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:52.437 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:52.437 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:52.437 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:52.437 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:52.437 [639/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:52.696 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:52.696 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:52.696 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:52.696 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:52.954 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:52.954 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:52.954 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:52.954 [647/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:52.955 [648/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:53.212 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:53.212 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:53.212 [651/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:53.470 [652/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:53.470 [653/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:53.470 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:53.470 [655/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:53.470 [656/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:53.728 [657/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:53.728 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:53.987 [659/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:53.987 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:53.987 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:53.987 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:53.987 [663/710] Linking static target drivers/librte_net_i40e.a 00:01:53.987 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:54.245 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:54.503 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:54.503 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:54.503 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.761 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:54.761 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:55.018 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:55.275 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:55.275 [673/710] Linking static target lib/librte_node.a 00:01:55.275 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:55.533 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.533 [676/710] Linking target lib/librte_node.so.24.0 00:01:56.908 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:56.908 
[678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:56.908 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:58.303 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:58.869 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:05.454 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.513 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.513 [684/710] Linking static target lib/librte_vhost.a 00:02:37.513 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.513 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:49.730 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:49.730 [688/710] Linking static target lib/librte_pipeline.a 00:02:49.730 [689/710] Linking target app/dpdk-dumpcap 00:02:49.730 [690/710] Linking target app/dpdk-proc-info 00:02:49.730 [691/710] Linking target app/dpdk-pdump 00:02:49.730 [692/710] Linking target app/dpdk-test-cmdline 00:02:49.730 [693/710] Linking target app/dpdk-graph 00:02:49.730 [694/710] Linking target app/dpdk-test-regex 00:02:49.730 [695/710] Linking target app/dpdk-test-sad 00:02:49.730 [696/710] Linking target app/dpdk-test-fib 00:02:49.730 [697/710] Linking target app/dpdk-test-gpudev 00:02:49.730 [698/710] Linking target app/dpdk-test-acl 00:02:49.730 [699/710] Linking target app/dpdk-test-security-perf 00:02:49.730 [700/710] Linking target app/dpdk-test-flow-perf 00:02:49.730 [701/710] Linking target app/dpdk-test-pipeline 00:02:49.730 [702/710] Linking target app/dpdk-test-mldev 00:02:49.730 [703/710] Linking target app/dpdk-test-dma-perf 00:02:49.730 [704/710] Linking target app/dpdk-test-compress-perf 00:02:49.730 [705/710] Linking target app/dpdk-test-bbdev 00:02:49.730 [706/710] Linking target app/dpdk-test-crypto-perf 00:02:49.730 [707/710] Linking target app/dpdk-test-eventdev 00:02:49.730 [708/710] Linking target app/dpdk-testpmd 00:02:50.665 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.665 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:50.924 19:33:00 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:50.924 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:50.924 [0/1] Installing files. 
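Note: the `ninja ... install` invocation above is the tail of an ordinary meson/ninja flow for DPDK. A minimal sketch of that flow is shown below, assuming a plain configure with the install prefix pointed at dpdk/build (which matches the install destinations that follow); the actual meson options passed by autobuild_common.sh are not shown in this excerpt and are assumptions here.

    # Sketch only: reproduce the build/install step recorded in the log above.
    # Assumption: prefix set to the in-tree build/ directory, no extra meson options.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp --prefix="$PWD/build"   # configure (options assumed, not from the log)
    ninja -C build-tmp -j48                       # compile, as in the [601/710] ... [710/710] lines above
    ninja -C build-tmp -j48 install               # install step, as invoked by autobuild_common.sh@190

The installed tree under build/share/dpdk/examples then receives the example sources listed in the "Installing ..." lines that follow.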
00:02:51.185 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:51.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.188 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.191 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.192 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:51.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:51.192 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.192 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.762 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.762 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.762 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.762 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.762 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.762 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:52.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:52.027 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:52.027 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:52.027 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:52.027 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:52.027 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:52.027 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:52.027 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:52.027 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:52.027 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:52.027 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:52.027 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:52.027 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:52.027 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:52.027 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:52.027 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:52.027 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:52.027 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:52.027 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:52.027 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:52.027 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:52.027 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:52.027 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:52.027 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:52.027 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:52.027 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:52.027 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:52.027 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:52.027 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:52.027 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:52.027 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:52.027 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:52.027 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:52.027 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:52.027 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:52.027 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:52.027 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:52.027 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:52.027 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:52.027 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:52.027 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:52.027 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:52.027 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:52.027 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:52.027 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:52.027 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:52.028 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:52.028 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:52.028 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:52.028 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:52.028 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:52.028 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:52.028 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:52.028 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:52.028 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:52.028 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:52.028 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:52.028 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:52.028 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:52.028 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:52.028 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:52.028 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:52.028 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:52.028 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:52.028 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:52.028 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:52.028 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:52.028 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:52.028 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:52.028 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:52.028 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:52.028 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:52.028 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:52.028 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:52.028 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:52.028 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:52.028 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:52.028 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:52.028 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:52.028 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:52.028 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:52.028 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:52.028 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:52.028 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:52.028 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:52.028 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:52.028 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:52.028 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:52.028 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:52.028 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:52.028 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:52.028 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:52.028 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:52.028 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:52.028 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:52.028 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:52.028 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:52.028 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:52.028 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:52.028 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:52.028 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:52.028 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:52.028 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:52.028 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:52.028 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:52.028 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:52.028 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:52.028 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:52.028 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:52.028 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:52.028 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:52.028 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:52.028 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:52.028 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:52.028 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:52.028 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:52.028 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:52.028 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:52.028 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:52.028 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:52.028 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:52.028 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:52.028 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:52.028 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:52.028 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:52.028 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:52.028 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:52.028 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:52.028 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:52.028 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:52.028 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:52.028 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:52.028 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:52.028 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:52.028 19:33:01 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:52.028 19:33:01 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.028 19:33:01 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:52.028 19:33:01 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.028 00:02:52.028 real 1m26.986s 00:02:52.028 user 17m58.612s 00:02:52.028 sys 2m5.791s 00:02:52.028 19:33:01 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:52.029 19:33:01 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:52.029 ************************************ 00:02:52.029 END TEST build_native_dpdk 00:02:52.029 ************************************ 00:02:52.029 19:33:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:52.029 19:33:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:52.029 19:33:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:52.029 19:33:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:52.029 19:33:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:52.029 19:33:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:52.029 19:33:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:52.029 19:33:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:52.029 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:52.029 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.029 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.287 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:52.546 Using 'verbs' RDMA provider 00:03:03.188 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:11.302 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:11.560 Creating mk/config.mk...done. 00:03:11.560 Creating mk/cc.flags.mk...done. 00:03:11.560 Type 'make' to build. 00:03:11.560 19:33:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:11.560 19:33:20 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:11.560 19:33:20 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:11.560 19:33:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.560 ************************************ 00:03:11.560 START TEST make 00:03:11.560 ************************************ 00:03:11.560 19:33:20 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:11.819 make[1]: Nothing to be done for 'all'. 
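The SPDK configure step above locates the freshly installed DPDK through the pkg-config files placed in dpdk/build/lib/pkgconfig (libdpdk.pc and libdpdk-libs.pc), as reported by the "Using ... for additional libs" line. A hedged sketch of resolving that same installation by hand; the PKG_CONFIG_PATH value is taken from this log, and the reported version and flags depend on the DPDK checkout being built:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # prints the version of the locally installed DPDK
  pkg-config --cflags --libs libdpdk     # include and link flags equivalent to what SPDK's configure consumes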
00:03:13.205 The Meson build system 00:03:13.205 Version: 1.3.1 00:03:13.205 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:13.205 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:13.205 Build type: native build 00:03:13.205 Project name: libvfio-user 00:03:13.205 Project version: 0.0.1 00:03:13.205 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:13.205 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:13.205 Host machine cpu family: x86_64 00:03:13.205 Host machine cpu: x86_64 00:03:13.205 Run-time dependency threads found: YES 00:03:13.205 Library dl found: YES 00:03:13.205 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:13.205 Run-time dependency json-c found: YES 0.17 00:03:13.205 Run-time dependency cmocka found: YES 1.1.7 00:03:13.205 Program pytest-3 found: NO 00:03:13.205 Program flake8 found: NO 00:03:13.205 Program misspell-fixer found: NO 00:03:13.205 Program restructuredtext-lint found: NO 00:03:13.205 Program valgrind found: YES (/usr/bin/valgrind) 00:03:13.205 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.205 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.205 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.205 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:13.205 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:13.205 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:13.205 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
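The run-time dependency probes above (json-c 0.17, cmocka 1.1.7) are typically ordinary pkg-config lookups, so the same results can be checked outside Meson. A small sketch, assuming both .pc files sit on the default pkg-config search path of this build host:

  pkg-config --modversion json-c    # reported as 0.17 in the Meson output above
  pkg-config --modversion cmocka    # reported as 1.1.7 in the Meson output above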
00:03:13.205 Build targets in project: 8 00:03:13.205 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:13.205 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:13.205 00:03:13.205 libvfio-user 0.0.1 00:03:13.205 00:03:13.205 User defined options 00:03:13.205 buildtype : debug 00:03:13.205 default_library: shared 00:03:13.205 libdir : /usr/local/lib 00:03:13.205 00:03:13.205 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.160 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.160 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:14.160 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:14.160 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:14.160 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:14.160 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:14.160 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:14.422 [7/37] Compiling C object samples/null.p/null.c.o 00:03:14.422 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:14.422 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:14.422 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:14.422 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:14.422 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:14.422 [13/37] Compiling C object samples/server.p/server.c.o 00:03:14.422 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:14.422 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:14.422 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:14.422 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:14.422 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:14.422 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:14.422 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:14.422 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:14.422 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:14.422 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:14.422 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:14.422 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:14.422 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:14.422 [27/37] Compiling C object samples/client.p/client.c.o 00:03:14.682 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:14.682 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:14.682 [30/37] Linking target samples/client 00:03:14.682 [31/37] Linking target test/unit_tests 00:03:14.682 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:14.945 [33/37] Linking target samples/server 00:03:14.945 [34/37] Linking target samples/null 00:03:14.945 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:14.945 [36/37] Linking target samples/gpio-pci-idio-16 00:03:14.945 [37/37] Linking target samples/lspci 00:03:14.945 INFO: autodetecting backend as ninja 00:03:14.945 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
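The configuration summary above (buildtype debug, default_library shared, libdir /usr/local/lib), together with the DESTDIR install shown just below, corresponds roughly to the following sequence. Paths and option values are taken from this log; the exact spelling used by SPDK's wrapper script is an assumption:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C "$BUILD"
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"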
00:03:14.946 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.517 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.517 ninja: no work to do. 00:03:27.716 CC lib/ut_mock/mock.o 00:03:27.716 CC lib/ut/ut.o 00:03:27.716 CC lib/log/log.o 00:03:27.716 CC lib/log/log_flags.o 00:03:27.716 CC lib/log/log_deprecated.o 00:03:27.716 LIB libspdk_log.a 00:03:27.716 LIB libspdk_ut.a 00:03:27.716 LIB libspdk_ut_mock.a 00:03:27.716 SO libspdk_ut.so.2.0 00:03:27.716 SO libspdk_ut_mock.so.6.0 00:03:27.716 SO libspdk_log.so.7.0 00:03:27.716 SYMLINK libspdk_ut.so 00:03:27.716 SYMLINK libspdk_ut_mock.so 00:03:27.716 SYMLINK libspdk_log.so 00:03:27.717 CXX lib/trace_parser/trace.o 00:03:27.717 CC lib/ioat/ioat.o 00:03:27.717 CC lib/dma/dma.o 00:03:27.717 CC lib/util/base64.o 00:03:27.717 CC lib/util/bit_array.o 00:03:27.717 CC lib/util/cpuset.o 00:03:27.717 CC lib/util/crc16.o 00:03:27.717 CC lib/util/crc32.o 00:03:27.717 CC lib/util/crc32c.o 00:03:27.717 CC lib/util/crc32_ieee.o 00:03:27.717 CC lib/util/crc64.o 00:03:27.717 CC lib/util/dif.o 00:03:27.717 CC lib/util/fd.o 00:03:27.717 CC lib/util/file.o 00:03:27.717 CC lib/util/hexlify.o 00:03:27.717 CC lib/util/iov.o 00:03:27.717 CC lib/util/math.o 00:03:27.717 CC lib/util/pipe.o 00:03:27.717 CC lib/util/strerror_tls.o 00:03:27.717 CC lib/util/string.o 00:03:27.717 CC lib/util/uuid.o 00:03:27.717 CC lib/util/fd_group.o 00:03:27.717 CC lib/util/xor.o 00:03:27.717 CC lib/util/zipf.o 00:03:27.717 CC lib/vfio_user/host/vfio_user_pci.o 00:03:27.717 CC lib/vfio_user/host/vfio_user.o 00:03:27.717 LIB libspdk_dma.a 00:03:27.717 SO libspdk_dma.so.4.0 00:03:27.717 LIB libspdk_ioat.a 00:03:27.717 SO libspdk_ioat.so.7.0 00:03:27.717 SYMLINK libspdk_dma.so 00:03:27.717 LIB libspdk_vfio_user.a 00:03:27.717 SYMLINK libspdk_ioat.so 00:03:27.717 SO libspdk_vfio_user.so.5.0 00:03:27.975 SYMLINK libspdk_vfio_user.so 00:03:27.975 LIB libspdk_util.a 00:03:27.975 SO libspdk_util.so.9.0 00:03:28.233 SYMLINK libspdk_util.so 00:03:28.233 CC lib/conf/conf.o 00:03:28.233 CC lib/vmd/vmd.o 00:03:28.233 CC lib/rdma/common.o 00:03:28.491 CC lib/idxd/idxd.o 00:03:28.491 CC lib/env_dpdk/env.o 00:03:28.491 CC lib/json/json_parse.o 00:03:28.491 CC lib/vmd/led.o 00:03:28.491 CC lib/rdma/rdma_verbs.o 00:03:28.491 CC lib/idxd/idxd_user.o 00:03:28.491 CC lib/json/json_util.o 00:03:28.491 CC lib/env_dpdk/memory.o 00:03:28.491 CC lib/idxd/idxd_kernel.o 00:03:28.491 CC lib/json/json_write.o 00:03:28.491 CC lib/env_dpdk/pci.o 00:03:28.491 CC lib/env_dpdk/init.o 00:03:28.491 CC lib/env_dpdk/threads.o 00:03:28.491 CC lib/env_dpdk/pci_ioat.o 00:03:28.492 CC lib/env_dpdk/pci_virtio.o 00:03:28.492 CC lib/env_dpdk/pci_vmd.o 00:03:28.492 CC lib/env_dpdk/pci_idxd.o 00:03:28.492 CC lib/env_dpdk/pci_event.o 00:03:28.492 CC lib/env_dpdk/sigbus_handler.o 00:03:28.492 CC lib/env_dpdk/pci_dpdk.o 00:03:28.492 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:28.492 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:28.492 LIB libspdk_trace_parser.a 00:03:28.492 SO libspdk_trace_parser.so.5.0 00:03:28.492 SYMLINK libspdk_trace_parser.so 00:03:28.750 LIB libspdk_conf.a 00:03:28.750 SO libspdk_conf.so.6.0 00:03:28.750 SYMLINK libspdk_conf.so 00:03:28.750 LIB libspdk_json.a 00:03:28.750 SO libspdk_json.so.6.0 00:03:28.750 LIB libspdk_rdma.a 00:03:28.750 SO libspdk_rdma.so.6.0 00:03:28.750 SYMLINK libspdk_json.so 00:03:28.750 SYMLINK 
libspdk_rdma.so 00:03:29.008 LIB libspdk_idxd.a 00:03:29.008 CC lib/jsonrpc/jsonrpc_server.o 00:03:29.008 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:29.008 CC lib/jsonrpc/jsonrpc_client.o 00:03:29.008 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:29.008 SO libspdk_idxd.so.12.0 00:03:29.008 SYMLINK libspdk_idxd.so 00:03:29.008 LIB libspdk_vmd.a 00:03:29.008 SO libspdk_vmd.so.6.0 00:03:29.008 SYMLINK libspdk_vmd.so 00:03:29.267 LIB libspdk_jsonrpc.a 00:03:29.267 SO libspdk_jsonrpc.so.6.0 00:03:29.267 SYMLINK libspdk_jsonrpc.so 00:03:29.525 CC lib/rpc/rpc.o 00:03:29.783 LIB libspdk_rpc.a 00:03:29.783 SO libspdk_rpc.so.6.0 00:03:29.783 SYMLINK libspdk_rpc.so 00:03:30.041 CC lib/keyring/keyring.o 00:03:30.041 CC lib/keyring/keyring_rpc.o 00:03:30.041 CC lib/trace/trace.o 00:03:30.041 CC lib/notify/notify.o 00:03:30.041 CC lib/trace/trace_flags.o 00:03:30.041 CC lib/notify/notify_rpc.o 00:03:30.041 CC lib/trace/trace_rpc.o 00:03:30.041 LIB libspdk_notify.a 00:03:30.041 SO libspdk_notify.so.6.0 00:03:30.300 LIB libspdk_keyring.a 00:03:30.300 SYMLINK libspdk_notify.so 00:03:30.300 LIB libspdk_trace.a 00:03:30.300 SO libspdk_keyring.so.1.0 00:03:30.300 SO libspdk_trace.so.10.0 00:03:30.300 SYMLINK libspdk_keyring.so 00:03:30.300 SYMLINK libspdk_trace.so 00:03:30.300 LIB libspdk_env_dpdk.a 00:03:30.559 CC lib/thread/thread.o 00:03:30.559 CC lib/thread/iobuf.o 00:03:30.559 SO libspdk_env_dpdk.so.14.0 00:03:30.559 CC lib/sock/sock.o 00:03:30.559 CC lib/sock/sock_rpc.o 00:03:30.559 SYMLINK libspdk_env_dpdk.so 00:03:30.817 LIB libspdk_sock.a 00:03:30.817 SO libspdk_sock.so.9.0 00:03:31.076 SYMLINK libspdk_sock.so 00:03:31.076 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:31.076 CC lib/nvme/nvme_ctrlr.o 00:03:31.076 CC lib/nvme/nvme_fabric.o 00:03:31.076 CC lib/nvme/nvme_ns_cmd.o 00:03:31.076 CC lib/nvme/nvme_ns.o 00:03:31.076 CC lib/nvme/nvme_pcie_common.o 00:03:31.076 CC lib/nvme/nvme_pcie.o 00:03:31.076 CC lib/nvme/nvme_qpair.o 00:03:31.076 CC lib/nvme/nvme.o 00:03:31.076 CC lib/nvme/nvme_quirks.o 00:03:31.076 CC lib/nvme/nvme_transport.o 00:03:31.076 CC lib/nvme/nvme_discovery.o 00:03:31.076 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:31.076 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:31.076 CC lib/nvme/nvme_tcp.o 00:03:31.076 CC lib/nvme/nvme_opal.o 00:03:31.076 CC lib/nvme/nvme_io_msg.o 00:03:31.076 CC lib/nvme/nvme_poll_group.o 00:03:31.076 CC lib/nvme/nvme_zns.o 00:03:31.076 CC lib/nvme/nvme_stubs.o 00:03:31.076 CC lib/nvme/nvme_auth.o 00:03:31.076 CC lib/nvme/nvme_cuse.o 00:03:31.076 CC lib/nvme/nvme_vfio_user.o 00:03:31.076 CC lib/nvme/nvme_rdma.o 00:03:32.014 LIB libspdk_thread.a 00:03:32.014 SO libspdk_thread.so.10.0 00:03:32.272 SYMLINK libspdk_thread.so 00:03:32.272 CC lib/vfu_tgt/tgt_endpoint.o 00:03:32.272 CC lib/accel/accel.o 00:03:32.272 CC lib/init/json_config.o 00:03:32.272 CC lib/accel/accel_rpc.o 00:03:32.272 CC lib/vfu_tgt/tgt_rpc.o 00:03:32.272 CC lib/virtio/virtio.o 00:03:32.272 CC lib/blob/blobstore.o 00:03:32.272 CC lib/init/subsystem.o 00:03:32.272 CC lib/accel/accel_sw.o 00:03:32.272 CC lib/virtio/virtio_vhost_user.o 00:03:32.272 CC lib/blob/request.o 00:03:32.272 CC lib/init/subsystem_rpc.o 00:03:32.272 CC lib/virtio/virtio_vfio_user.o 00:03:32.272 CC lib/blob/zeroes.o 00:03:32.272 CC lib/init/rpc.o 00:03:32.272 CC lib/virtio/virtio_pci.o 00:03:32.272 CC lib/blob/blob_bs_dev.o 00:03:32.530 LIB libspdk_init.a 00:03:32.530 SO libspdk_init.so.5.0 00:03:32.789 LIB libspdk_virtio.a 00:03:32.789 LIB libspdk_vfu_tgt.a 00:03:32.789 SYMLINK libspdk_init.so 00:03:32.789 SO libspdk_virtio.so.7.0 00:03:32.789 
SO libspdk_vfu_tgt.so.3.0 00:03:32.789 SYMLINK libspdk_vfu_tgt.so 00:03:32.789 SYMLINK libspdk_virtio.so 00:03:32.789 CC lib/event/app.o 00:03:32.789 CC lib/event/reactor.o 00:03:32.789 CC lib/event/log_rpc.o 00:03:32.789 CC lib/event/app_rpc.o 00:03:32.789 CC lib/event/scheduler_static.o 00:03:33.355 LIB libspdk_event.a 00:03:33.355 SO libspdk_event.so.13.0 00:03:33.355 SYMLINK libspdk_event.so 00:03:33.355 LIB libspdk_accel.a 00:03:33.355 SO libspdk_accel.so.15.0 00:03:33.355 SYMLINK libspdk_accel.so 00:03:33.613 CC lib/bdev/bdev.o 00:03:33.613 CC lib/bdev/bdev_rpc.o 00:03:33.613 CC lib/bdev/bdev_zone.o 00:03:33.613 CC lib/bdev/part.o 00:03:33.613 CC lib/bdev/scsi_nvme.o 00:03:33.613 LIB libspdk_nvme.a 00:03:33.871 SO libspdk_nvme.so.13.0 00:03:34.129 SYMLINK libspdk_nvme.so 00:03:35.503 LIB libspdk_blob.a 00:03:35.503 SO libspdk_blob.so.11.0 00:03:35.503 SYMLINK libspdk_blob.so 00:03:35.761 CC lib/lvol/lvol.o 00:03:35.761 CC lib/blobfs/blobfs.o 00:03:35.761 CC lib/blobfs/tree.o 00:03:36.020 LIB libspdk_bdev.a 00:03:36.278 SO libspdk_bdev.so.15.0 00:03:36.278 SYMLINK libspdk_bdev.so 00:03:36.546 CC lib/scsi/dev.o 00:03:36.546 CC lib/ublk/ublk.o 00:03:36.546 CC lib/nbd/nbd.o 00:03:36.546 CC lib/nvmf/ctrlr.o 00:03:36.546 CC lib/scsi/lun.o 00:03:36.546 CC lib/ublk/ublk_rpc.o 00:03:36.546 CC lib/nbd/nbd_rpc.o 00:03:36.546 CC lib/ftl/ftl_core.o 00:03:36.546 CC lib/nvmf/ctrlr_discovery.o 00:03:36.546 CC lib/scsi/port.o 00:03:36.546 CC lib/nvmf/ctrlr_bdev.o 00:03:36.546 CC lib/ftl/ftl_init.o 00:03:36.546 CC lib/scsi/scsi.o 00:03:36.546 CC lib/nvmf/subsystem.o 00:03:36.546 CC lib/ftl/ftl_layout.o 00:03:36.546 CC lib/scsi/scsi_bdev.o 00:03:36.546 CC lib/nvmf/nvmf.o 00:03:36.546 CC lib/ftl/ftl_io.o 00:03:36.546 CC lib/scsi/scsi_pr.o 00:03:36.546 CC lib/ftl/ftl_debug.o 00:03:36.546 CC lib/nvmf/nvmf_rpc.o 00:03:36.546 CC lib/scsi/scsi_rpc.o 00:03:36.546 CC lib/ftl/ftl_sb.o 00:03:36.546 CC lib/scsi/task.o 00:03:36.546 CC lib/nvmf/transport.o 00:03:36.546 CC lib/ftl/ftl_l2p.o 00:03:36.546 CC lib/nvmf/tcp.o 00:03:36.546 CC lib/ftl/ftl_l2p_flat.o 00:03:36.546 CC lib/nvmf/stubs.o 00:03:36.546 CC lib/ftl/ftl_nv_cache.o 00:03:36.546 CC lib/nvmf/vfio_user.o 00:03:36.546 CC lib/nvmf/mdns_server.o 00:03:36.546 CC lib/ftl/ftl_band.o 00:03:36.546 CC lib/ftl/ftl_band_ops.o 00:03:36.546 CC lib/nvmf/rdma.o 00:03:36.546 CC lib/ftl/ftl_writer.o 00:03:36.546 CC lib/nvmf/auth.o 00:03:36.546 CC lib/ftl/ftl_rq.o 00:03:36.546 CC lib/ftl/ftl_reloc.o 00:03:36.546 CC lib/ftl/ftl_l2p_cache.o 00:03:36.546 CC lib/ftl/ftl_p2l.o 00:03:36.546 CC lib/ftl/mngt/ftl_mngt.o 00:03:36.546 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:36.546 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.546 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.546 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:36.546 LIB libspdk_blobfs.a 00:03:36.546 SO libspdk_blobfs.so.10.0 00:03:36.807 LIB libspdk_lvol.a 00:03:36.807 SO libspdk_lvol.so.10.0 00:03:36.807 SYMLINK libspdk_blobfs.so 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.807 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.807 SYMLINK libspdk_lvol.so 00:03:36.807 CC lib/ftl/utils/ftl_conf.o 00:03:36.807 CC lib/ftl/utils/ftl_md.o 00:03:36.807 CC lib/ftl/utils/ftl_mempool.o 00:03:36.807 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.807 CC 
lib/ftl/utils/ftl_property.o 00:03:36.807 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.807 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:37.068 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:37.068 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:37.068 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:37.068 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:37.068 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:37.068 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:37.068 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:37.068 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:37.068 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:37.068 CC lib/ftl/base/ftl_base_dev.o 00:03:37.068 CC lib/ftl/base/ftl_base_bdev.o 00:03:37.068 CC lib/ftl/ftl_trace.o 00:03:37.327 LIB libspdk_nbd.a 00:03:37.327 SO libspdk_nbd.so.7.0 00:03:37.327 SYMLINK libspdk_nbd.so 00:03:37.327 LIB libspdk_scsi.a 00:03:37.327 SO libspdk_scsi.so.9.0 00:03:37.327 LIB libspdk_ublk.a 00:03:37.585 SO libspdk_ublk.so.3.0 00:03:37.585 SYMLINK libspdk_ublk.so 00:03:37.585 SYMLINK libspdk_scsi.so 00:03:37.585 CC lib/iscsi/conn.o 00:03:37.585 CC lib/vhost/vhost.o 00:03:37.585 CC lib/vhost/vhost_rpc.o 00:03:37.585 CC lib/iscsi/init_grp.o 00:03:37.585 CC lib/iscsi/iscsi.o 00:03:37.585 CC lib/vhost/vhost_scsi.o 00:03:37.585 CC lib/iscsi/md5.o 00:03:37.585 CC lib/vhost/vhost_blk.o 00:03:37.585 CC lib/iscsi/param.o 00:03:37.585 CC lib/vhost/rte_vhost_user.o 00:03:37.585 CC lib/iscsi/portal_grp.o 00:03:37.585 CC lib/iscsi/tgt_node.o 00:03:37.585 CC lib/iscsi/iscsi_subsystem.o 00:03:37.585 CC lib/iscsi/task.o 00:03:37.585 CC lib/iscsi/iscsi_rpc.o 00:03:37.843 LIB libspdk_ftl.a 00:03:38.101 SO libspdk_ftl.so.9.0 00:03:38.359 SYMLINK libspdk_ftl.so 00:03:38.926 LIB libspdk_vhost.a 00:03:38.926 SO libspdk_vhost.so.8.0 00:03:38.926 LIB libspdk_nvmf.a 00:03:39.184 SYMLINK libspdk_vhost.so 00:03:39.184 SO libspdk_nvmf.so.18.0 00:03:39.184 LIB libspdk_iscsi.a 00:03:39.184 SO libspdk_iscsi.so.8.0 00:03:39.442 SYMLINK libspdk_nvmf.so 00:03:39.442 SYMLINK libspdk_iscsi.so 00:03:39.701 CC module/env_dpdk/env_dpdk_rpc.o 00:03:39.701 CC module/vfu_device/vfu_virtio.o 00:03:39.701 CC module/vfu_device/vfu_virtio_blk.o 00:03:39.701 CC module/vfu_device/vfu_virtio_scsi.o 00:03:39.701 CC module/vfu_device/vfu_virtio_rpc.o 00:03:39.701 CC module/accel/dsa/accel_dsa.o 00:03:39.701 CC module/accel/dsa/accel_dsa_rpc.o 00:03:39.701 CC module/keyring/file/keyring.o 00:03:39.701 CC module/keyring/file/keyring_rpc.o 00:03:39.701 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:39.701 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:39.701 CC module/blob/bdev/blob_bdev.o 00:03:39.701 CC module/accel/ioat/accel_ioat.o 00:03:39.701 CC module/keyring/linux/keyring.o 00:03:39.701 CC module/sock/posix/posix.o 00:03:39.701 CC module/accel/iaa/accel_iaa.o 00:03:39.701 CC module/scheduler/gscheduler/gscheduler.o 00:03:39.701 CC module/accel/ioat/accel_ioat_rpc.o 00:03:39.701 CC module/accel/error/accel_error.o 00:03:39.701 CC module/keyring/linux/keyring_rpc.o 00:03:39.701 CC module/accel/iaa/accel_iaa_rpc.o 00:03:39.701 CC module/accel/error/accel_error_rpc.o 00:03:39.701 LIB libspdk_env_dpdk_rpc.a 00:03:39.701 SO libspdk_env_dpdk_rpc.so.6.0 00:03:39.960 SYMLINK libspdk_env_dpdk_rpc.so 00:03:39.960 LIB libspdk_keyring_linux.a 00:03:39.960 LIB libspdk_scheduler_dpdk_governor.a 00:03:39.960 LIB libspdk_scheduler_gscheduler.a 00:03:39.960 LIB libspdk_keyring_file.a 00:03:39.960 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:39.960 SO libspdk_keyring_linux.so.1.0 00:03:39.960 SO libspdk_scheduler_gscheduler.so.4.0 00:03:39.960 
SO libspdk_keyring_file.so.1.0 00:03:39.960 LIB libspdk_accel_error.a 00:03:39.960 LIB libspdk_scheduler_dynamic.a 00:03:39.960 LIB libspdk_accel_ioat.a 00:03:39.960 SO libspdk_accel_error.so.2.0 00:03:39.960 LIB libspdk_accel_iaa.a 00:03:39.960 SO libspdk_scheduler_dynamic.so.4.0 00:03:39.960 SO libspdk_accel_ioat.so.6.0 00:03:39.960 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:39.960 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.960 SYMLINK libspdk_keyring_linux.so 00:03:39.960 SYMLINK libspdk_keyring_file.so 00:03:39.960 SO libspdk_accel_iaa.so.3.0 00:03:39.960 SYMLINK libspdk_accel_error.so 00:03:39.960 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.960 LIB libspdk_accel_dsa.a 00:03:39.960 SYMLINK libspdk_accel_ioat.so 00:03:39.960 LIB libspdk_blob_bdev.a 00:03:39.960 SO libspdk_accel_dsa.so.5.0 00:03:39.960 SYMLINK libspdk_accel_iaa.so 00:03:39.960 SO libspdk_blob_bdev.so.11.0 00:03:39.960 SYMLINK libspdk_blob_bdev.so 00:03:39.960 SYMLINK libspdk_accel_dsa.so 00:03:40.220 LIB libspdk_vfu_device.a 00:03:40.220 CC module/blobfs/bdev/blobfs_bdev.o 00:03:40.220 CC module/bdev/delay/vbdev_delay.o 00:03:40.220 CC module/bdev/gpt/gpt.o 00:03:40.220 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:40.220 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:40.220 CC module/bdev/raid/bdev_raid.o 00:03:40.220 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:40.220 CC module/bdev/nvme/bdev_nvme.o 00:03:40.220 CC module/bdev/lvol/vbdev_lvol.o 00:03:40.220 CC module/bdev/iscsi/bdev_iscsi.o 00:03:40.220 CC module/bdev/nvme/bdev_mdns_client.o 00:03:40.220 CC module/bdev/raid/bdev_raid_rpc.o 00:03:40.220 CC module/bdev/nvme/nvme_rpc.o 00:03:40.220 CC module/bdev/split/vbdev_split.o 00:03:40.220 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:40.220 CC module/bdev/gpt/vbdev_gpt.o 00:03:40.220 CC module/bdev/nvme/vbdev_opal.o 00:03:40.220 CC module/bdev/raid/bdev_raid_sb.o 00:03:40.220 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:40.220 CC module/bdev/split/vbdev_split_rpc.o 00:03:40.220 CC module/bdev/raid/raid0.o 00:03:40.220 CC module/bdev/malloc/bdev_malloc.o 00:03:40.220 CC module/bdev/aio/bdev_aio.o 00:03:40.220 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:40.220 CC module/bdev/raid/raid1.o 00:03:40.220 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:40.220 CC module/bdev/aio/bdev_aio_rpc.o 00:03:40.220 CC module/bdev/passthru/vbdev_passthru.o 00:03:40.220 CC module/bdev/raid/concat.o 00:03:40.220 CC module/bdev/error/vbdev_error.o 00:03:40.220 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:40.220 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:40.220 CC module/bdev/null/bdev_null.o 00:03:40.220 CC module/bdev/ftl/bdev_ftl.o 00:03:40.220 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:40.220 CC module/bdev/error/vbdev_error_rpc.o 00:03:40.220 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:40.220 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:40.220 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:40.220 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:40.220 CC module/bdev/null/bdev_null_rpc.o 00:03:40.220 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:40.220 SO libspdk_vfu_device.so.3.0 00:03:40.480 SYMLINK libspdk_vfu_device.so 00:03:40.480 LIB libspdk_sock_posix.a 00:03:40.738 SO libspdk_sock_posix.so.6.0 00:03:40.738 LIB libspdk_blobfs_bdev.a 00:03:40.738 SYMLINK libspdk_sock_posix.so 00:03:40.738 SO libspdk_blobfs_bdev.so.6.0 00:03:40.738 LIB libspdk_bdev_split.a 00:03:40.738 SO libspdk_bdev_split.so.6.0 00:03:40.738 LIB libspdk_bdev_error.a 00:03:40.738 SYMLINK libspdk_blobfs_bdev.so 00:03:40.738 LIB 
libspdk_bdev_gpt.a 00:03:40.738 LIB libspdk_bdev_null.a 00:03:40.738 SO libspdk_bdev_error.so.6.0 00:03:40.738 LIB libspdk_bdev_passthru.a 00:03:40.738 SO libspdk_bdev_gpt.so.6.0 00:03:40.738 SYMLINK libspdk_bdev_split.so 00:03:40.738 SO libspdk_bdev_null.so.6.0 00:03:40.996 SO libspdk_bdev_passthru.so.6.0 00:03:40.996 LIB libspdk_bdev_ftl.a 00:03:40.996 SYMLINK libspdk_bdev_error.so 00:03:40.996 LIB libspdk_bdev_malloc.a 00:03:40.996 LIB libspdk_bdev_iscsi.a 00:03:40.996 SYMLINK libspdk_bdev_gpt.so 00:03:40.996 LIB libspdk_bdev_aio.a 00:03:40.996 SO libspdk_bdev_ftl.so.6.0 00:03:40.996 LIB libspdk_bdev_zone_block.a 00:03:40.996 SYMLINK libspdk_bdev_null.so 00:03:40.996 SYMLINK libspdk_bdev_passthru.so 00:03:40.996 SO libspdk_bdev_iscsi.so.6.0 00:03:40.996 SO libspdk_bdev_malloc.so.6.0 00:03:40.996 SO libspdk_bdev_aio.so.6.0 00:03:40.996 SO libspdk_bdev_zone_block.so.6.0 00:03:40.996 LIB libspdk_bdev_delay.a 00:03:40.996 SYMLINK libspdk_bdev_ftl.so 00:03:40.996 SYMLINK libspdk_bdev_iscsi.so 00:03:40.997 SYMLINK libspdk_bdev_malloc.so 00:03:40.997 SYMLINK libspdk_bdev_aio.so 00:03:40.997 SO libspdk_bdev_delay.so.6.0 00:03:40.997 SYMLINK libspdk_bdev_zone_block.so 00:03:40.997 SYMLINK libspdk_bdev_delay.so 00:03:40.997 LIB libspdk_bdev_virtio.a 00:03:40.997 SO libspdk_bdev_virtio.so.6.0 00:03:40.997 LIB libspdk_bdev_lvol.a 00:03:41.255 SO libspdk_bdev_lvol.so.6.0 00:03:41.255 SYMLINK libspdk_bdev_virtio.so 00:03:41.255 SYMLINK libspdk_bdev_lvol.so 00:03:41.513 LIB libspdk_bdev_raid.a 00:03:41.513 SO libspdk_bdev_raid.so.6.0 00:03:41.513 SYMLINK libspdk_bdev_raid.so 00:03:42.947 LIB libspdk_bdev_nvme.a 00:03:42.947 SO libspdk_bdev_nvme.so.7.0 00:03:42.947 SYMLINK libspdk_bdev_nvme.so 00:03:43.206 CC module/event/subsystems/sock/sock.o 00:03:43.206 CC module/event/subsystems/scheduler/scheduler.o 00:03:43.206 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:43.206 CC module/event/subsystems/keyring/keyring.o 00:03:43.206 CC module/event/subsystems/iobuf/iobuf.o 00:03:43.206 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:43.206 CC module/event/subsystems/vmd/vmd.o 00:03:43.206 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:43.206 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:43.464 LIB libspdk_event_keyring.a 00:03:43.464 LIB libspdk_event_vhost_blk.a 00:03:43.464 LIB libspdk_event_sock.a 00:03:43.464 LIB libspdk_event_vfu_tgt.a 00:03:43.464 LIB libspdk_event_scheduler.a 00:03:43.464 LIB libspdk_event_vmd.a 00:03:43.464 SO libspdk_event_keyring.so.1.0 00:03:43.464 LIB libspdk_event_iobuf.a 00:03:43.464 SO libspdk_event_sock.so.5.0 00:03:43.464 SO libspdk_event_vhost_blk.so.3.0 00:03:43.464 SO libspdk_event_vfu_tgt.so.3.0 00:03:43.464 SO libspdk_event_scheduler.so.4.0 00:03:43.464 SO libspdk_event_vmd.so.6.0 00:03:43.464 SO libspdk_event_iobuf.so.3.0 00:03:43.464 SYMLINK libspdk_event_keyring.so 00:03:43.464 SYMLINK libspdk_event_vhost_blk.so 00:03:43.464 SYMLINK libspdk_event_sock.so 00:03:43.464 SYMLINK libspdk_event_vfu_tgt.so 00:03:43.464 SYMLINK libspdk_event_scheduler.so 00:03:43.464 SYMLINK libspdk_event_vmd.so 00:03:43.464 SYMLINK libspdk_event_iobuf.so 00:03:43.723 CC module/event/subsystems/accel/accel.o 00:03:43.723 LIB libspdk_event_accel.a 00:03:43.723 SO libspdk_event_accel.so.6.0 00:03:43.982 SYMLINK libspdk_event_accel.so 00:03:43.982 CC module/event/subsystems/bdev/bdev.o 00:03:44.239 LIB libspdk_event_bdev.a 00:03:44.239 SO libspdk_event_bdev.so.6.0 00:03:44.239 SYMLINK libspdk_event_bdev.so 00:03:44.498 CC module/event/subsystems/scsi/scsi.o 00:03:44.498 CC 
module/event/subsystems/ublk/ublk.o 00:03:44.498 CC module/event/subsystems/nbd/nbd.o 00:03:44.498 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:44.498 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:44.498 LIB libspdk_event_nbd.a 00:03:44.498 LIB libspdk_event_ublk.a 00:03:44.498 LIB libspdk_event_scsi.a 00:03:44.756 SO libspdk_event_nbd.so.6.0 00:03:44.756 SO libspdk_event_ublk.so.3.0 00:03:44.756 SO libspdk_event_scsi.so.6.0 00:03:44.756 SYMLINK libspdk_event_nbd.so 00:03:44.756 SYMLINK libspdk_event_ublk.so 00:03:44.756 SYMLINK libspdk_event_scsi.so 00:03:44.756 LIB libspdk_event_nvmf.a 00:03:44.756 SO libspdk_event_nvmf.so.6.0 00:03:44.756 SYMLINK libspdk_event_nvmf.so 00:03:44.756 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:44.756 CC module/event/subsystems/iscsi/iscsi.o 00:03:45.015 LIB libspdk_event_vhost_scsi.a 00:03:45.015 LIB libspdk_event_iscsi.a 00:03:45.015 SO libspdk_event_vhost_scsi.so.3.0 00:03:45.015 SO libspdk_event_iscsi.so.6.0 00:03:45.015 SYMLINK libspdk_event_vhost_scsi.so 00:03:45.015 SYMLINK libspdk_event_iscsi.so 00:03:45.273 SO libspdk.so.6.0 00:03:45.273 SYMLINK libspdk.so 00:03:45.539 CC app/trace_record/trace_record.o 00:03:45.539 CC app/spdk_top/spdk_top.o 00:03:45.539 CC app/spdk_lspci/spdk_lspci.o 00:03:45.539 CC app/spdk_nvme_identify/identify.o 00:03:45.539 CC app/spdk_nvme_perf/perf.o 00:03:45.539 CXX app/trace/trace.o 00:03:45.539 CC app/spdk_nvme_discover/discovery_aer.o 00:03:45.539 CC test/rpc_client/rpc_client_test.o 00:03:45.539 TEST_HEADER include/spdk/accel.h 00:03:45.539 TEST_HEADER include/spdk/accel_module.h 00:03:45.539 TEST_HEADER include/spdk/assert.h 00:03:45.539 TEST_HEADER include/spdk/barrier.h 00:03:45.539 TEST_HEADER include/spdk/base64.h 00:03:45.539 TEST_HEADER include/spdk/bdev.h 00:03:45.539 TEST_HEADER include/spdk/bdev_module.h 00:03:45.539 TEST_HEADER include/spdk/bdev_zone.h 00:03:45.539 TEST_HEADER include/spdk/bit_array.h 00:03:45.539 TEST_HEADER include/spdk/bit_pool.h 00:03:45.539 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.539 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.539 TEST_HEADER include/spdk/blobfs.h 00:03:45.539 TEST_HEADER include/spdk/blob.h 00:03:45.539 TEST_HEADER include/spdk/conf.h 00:03:45.539 TEST_HEADER include/spdk/config.h 00:03:45.539 CC app/spdk_dd/spdk_dd.o 00:03:45.539 TEST_HEADER include/spdk/cpuset.h 00:03:45.539 TEST_HEADER include/spdk/crc16.h 00:03:45.539 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:45.539 TEST_HEADER include/spdk/crc32.h 00:03:45.539 CC app/iscsi_tgt/iscsi_tgt.o 00:03:45.539 TEST_HEADER include/spdk/crc64.h 00:03:45.539 TEST_HEADER include/spdk/dif.h 00:03:45.539 CC app/nvmf_tgt/nvmf_main.o 00:03:45.539 TEST_HEADER include/spdk/dma.h 00:03:45.539 TEST_HEADER include/spdk/endian.h 00:03:45.539 CC app/vhost/vhost.o 00:03:45.539 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.539 TEST_HEADER include/spdk/env.h 00:03:45.539 TEST_HEADER include/spdk/event.h 00:03:45.539 TEST_HEADER include/spdk/fd_group.h 00:03:45.539 TEST_HEADER include/spdk/fd.h 00:03:45.539 TEST_HEADER include/spdk/file.h 00:03:45.539 TEST_HEADER include/spdk/ftl.h 00:03:45.539 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.539 TEST_HEADER include/spdk/hexlify.h 00:03:45.539 TEST_HEADER include/spdk/histogram_data.h 00:03:45.539 TEST_HEADER include/spdk/idxd.h 00:03:45.539 CC examples/util/zipf/zipf.o 00:03:45.539 CC examples/idxd/perf/perf.o 00:03:45.539 CC app/spdk_tgt/spdk_tgt.o 00:03:45.539 CC examples/nvme/reconnect/reconnect.o 00:03:45.539 CC test/event/reactor_perf/reactor_perf.o 
00:03:45.539 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.539 CC examples/ioat/perf/perf.o 00:03:45.539 CC examples/nvme/hello_world/hello_world.o 00:03:45.539 CC test/event/reactor/reactor.o 00:03:45.539 TEST_HEADER include/spdk/init.h 00:03:45.539 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.539 CC examples/nvme/arbitration/arbitration.o 00:03:45.539 CC examples/vmd/led/led.o 00:03:45.539 TEST_HEADER include/spdk/ioat.h 00:03:45.539 CC test/event/event_perf/event_perf.o 00:03:45.539 CC app/fio/nvme/fio_plugin.o 00:03:45.539 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.539 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.539 CC test/nvme/aer/aer.o 00:03:45.539 CC examples/nvme/abort/abort.o 00:03:45.539 CC examples/sock/hello_world/hello_sock.o 00:03:45.539 CC test/thread/poller_perf/poller_perf.o 00:03:45.539 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.539 TEST_HEADER include/spdk/json.h 00:03:45.539 CC examples/accel/perf/accel_perf.o 00:03:45.539 CC examples/nvme/hotplug/hotplug.o 00:03:45.539 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.539 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.539 TEST_HEADER include/spdk/keyring.h 00:03:45.539 TEST_HEADER include/spdk/keyring_module.h 00:03:45.539 TEST_HEADER include/spdk/likely.h 00:03:45.539 TEST_HEADER include/spdk/log.h 00:03:45.539 TEST_HEADER include/spdk/lvol.h 00:03:45.539 TEST_HEADER include/spdk/memory.h 00:03:45.539 TEST_HEADER include/spdk/mmio.h 00:03:45.539 TEST_HEADER include/spdk/nbd.h 00:03:45.539 CC examples/blob/cli/blobcli.o 00:03:45.539 TEST_HEADER include/spdk/notify.h 00:03:45.539 CC test/blobfs/mkfs/mkfs.o 00:03:45.539 CC examples/thread/thread/thread_ex.o 00:03:45.539 TEST_HEADER include/spdk/nvme.h 00:03:45.539 CC examples/blob/hello_world/hello_blob.o 00:03:45.539 CC examples/bdev/hello_world/hello_bdev.o 00:03:45.539 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.539 CC test/dma/test_dma/test_dma.o 00:03:45.539 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.539 CC test/accel/dif/dif.o 00:03:45.798 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.798 TEST_HEADER include/spdk/nvme_spec.h 00:03:45.798 CC test/bdev/bdevio/bdevio.o 00:03:45.798 CC examples/bdev/bdevperf/bdevperf.o 00:03:45.798 TEST_HEADER include/spdk/nvme_zns.h 00:03:45.798 CC examples/nvmf/nvmf/nvmf.o 00:03:45.798 CC test/app/bdev_svc/bdev_svc.o 00:03:45.798 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.798 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.798 TEST_HEADER include/spdk/nvmf.h 00:03:45.798 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.798 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.798 TEST_HEADER include/spdk/opal.h 00:03:45.798 TEST_HEADER include/spdk/opal_spec.h 00:03:45.798 TEST_HEADER include/spdk/pci_ids.h 00:03:45.798 TEST_HEADER include/spdk/pipe.h 00:03:45.798 TEST_HEADER include/spdk/queue.h 00:03:45.798 TEST_HEADER include/spdk/reduce.h 00:03:45.798 TEST_HEADER include/spdk/rpc.h 00:03:45.798 TEST_HEADER include/spdk/scheduler.h 00:03:45.798 TEST_HEADER include/spdk/scsi.h 00:03:45.798 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.798 TEST_HEADER include/spdk/sock.h 00:03:45.798 TEST_HEADER include/spdk/stdinc.h 00:03:45.798 TEST_HEADER include/spdk/string.h 00:03:45.798 TEST_HEADER include/spdk/thread.h 00:03:45.798 LINK spdk_lspci 00:03:45.798 TEST_HEADER include/spdk/trace.h 00:03:45.798 TEST_HEADER include/spdk/trace_parser.h 00:03:45.798 TEST_HEADER include/spdk/tree.h 00:03:45.798 TEST_HEADER include/spdk/ublk.h 00:03:45.798 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.798 TEST_HEADER 
include/spdk/util.h 00:03:45.798 TEST_HEADER include/spdk/uuid.h 00:03:45.798 TEST_HEADER include/spdk/version.h 00:03:45.798 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.798 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.798 TEST_HEADER include/spdk/vhost.h 00:03:45.798 TEST_HEADER include/spdk/vmd.h 00:03:45.798 TEST_HEADER include/spdk/xor.h 00:03:45.798 TEST_HEADER include/spdk/zipf.h 00:03:45.798 CXX test/cpp_headers/accel.o 00:03:45.798 CC test/lvol/esnap/esnap.o 00:03:45.798 LINK rpc_client_test 00:03:45.798 LINK reactor_perf 00:03:45.798 LINK reactor 00:03:45.798 LINK spdk_nvme_discover 00:03:45.798 LINK lsvmd 00:03:45.798 LINK led 00:03:45.798 LINK event_perf 00:03:45.798 LINK zipf 00:03:45.798 LINK interrupt_tgt 00:03:46.062 LINK poller_perf 00:03:46.062 LINK vhost 00:03:46.062 LINK spdk_trace_record 00:03:46.062 LINK nvmf_tgt 00:03:46.062 LINK iscsi_tgt 00:03:46.062 LINK cmb_copy 00:03:46.062 LINK spdk_tgt 00:03:46.062 LINK hello_world 00:03:46.062 LINK ioat_perf 00:03:46.062 LINK mkfs 00:03:46.062 LINK hello_sock 00:03:46.062 LINK bdev_svc 00:03:46.062 LINK hotplug 00:03:46.062 LINK hello_blob 00:03:46.062 LINK hello_bdev 00:03:46.062 LINK thread 00:03:46.062 CXX test/cpp_headers/accel_module.o 00:03:46.062 LINK aer 00:03:46.325 CXX test/cpp_headers/assert.o 00:03:46.325 CXX test/cpp_headers/barrier.o 00:03:46.325 LINK arbitration 00:03:46.325 LINK spdk_dd 00:03:46.325 CXX test/cpp_headers/base64.o 00:03:46.325 LINK idxd_perf 00:03:46.325 LINK reconnect 00:03:46.325 LINK nvmf 00:03:46.325 CC app/fio/bdev/fio_plugin.o 00:03:46.325 LINK spdk_trace 00:03:46.325 LINK abort 00:03:46.325 CC examples/ioat/verify/verify.o 00:03:46.325 CXX test/cpp_headers/bdev.o 00:03:46.325 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.325 CXX test/cpp_headers/bdev_module.o 00:03:46.325 CC test/event/app_repeat/app_repeat.o 00:03:46.325 LINK test_dma 00:03:46.325 CC test/app/histogram_perf/histogram_perf.o 00:03:46.325 CC test/nvme/reset/reset.o 00:03:46.325 LINK bdevio 00:03:46.325 CC test/env/vtophys/vtophys.o 00:03:46.588 CC test/app/jsoncat/jsoncat.o 00:03:46.588 CXX test/cpp_headers/bdev_zone.o 00:03:46.588 CXX test/cpp_headers/bit_array.o 00:03:46.588 CC test/event/scheduler/scheduler.o 00:03:46.588 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.588 LINK dif 00:03:46.588 CXX test/cpp_headers/bit_pool.o 00:03:46.588 CXX test/cpp_headers/blob_bdev.o 00:03:46.588 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:46.588 LINK nvme_manage 00:03:46.588 CC test/app/stub/stub.o 00:03:46.588 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:46.588 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.588 LINK accel_perf 00:03:46.588 CXX test/cpp_headers/blobfs.o 00:03:46.588 CXX test/cpp_headers/blob.o 00:03:46.588 CC test/env/memory/memory_ut.o 00:03:46.588 LINK blobcli 00:03:46.588 CC test/env/pci/pci_ut.o 00:03:46.588 CXX test/cpp_headers/conf.o 00:03:46.588 LINK spdk_nvme 00:03:46.589 CC test/nvme/sgl/sgl.o 00:03:46.589 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:46.858 CC test/nvme/overhead/overhead.o 00:03:46.858 CC test/nvme/e2edp/nvme_dp.o 00:03:46.858 CC test/nvme/startup/startup.o 00:03:46.858 LINK pmr_persistence 00:03:46.858 CC test/nvme/err_injection/err_injection.o 00:03:46.858 LINK app_repeat 00:03:46.858 LINK histogram_perf 00:03:46.858 LINK vtophys 00:03:46.858 CC test/nvme/reserve/reserve.o 00:03:46.858 CXX test/cpp_headers/config.o 00:03:46.858 CXX test/cpp_headers/cpuset.o 00:03:46.858 LINK verify 00:03:46.858 LINK jsoncat 00:03:46.858 CC 
test/nvme/connect_stress/connect_stress.o 00:03:46.858 CC test/nvme/simple_copy/simple_copy.o 00:03:46.858 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:46.858 CXX test/cpp_headers/crc16.o 00:03:46.858 CC test/nvme/boot_partition/boot_partition.o 00:03:46.858 LINK env_dpdk_post_init 00:03:46.858 CXX test/cpp_headers/crc32.o 00:03:46.858 LINK stub 00:03:46.858 CXX test/cpp_headers/crc64.o 00:03:46.858 CXX test/cpp_headers/dif.o 00:03:46.858 CXX test/cpp_headers/dma.o 00:03:46.858 CXX test/cpp_headers/endian.o 00:03:46.858 LINK mem_callbacks 00:03:46.858 CXX test/cpp_headers/env_dpdk.o 00:03:46.858 LINK reset 00:03:47.121 LINK spdk_nvme_perf 00:03:47.121 CC test/nvme/compliance/nvme_compliance.o 00:03:47.121 LINK scheduler 00:03:47.121 CXX test/cpp_headers/env.o 00:03:47.121 CC test/nvme/fused_ordering/fused_ordering.o 00:03:47.121 LINK spdk_nvme_identify 00:03:47.121 CXX test/cpp_headers/event.o 00:03:47.121 CXX test/cpp_headers/fd_group.o 00:03:47.121 CXX test/cpp_headers/fd.o 00:03:47.121 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:47.121 CC test/nvme/fdp/fdp.o 00:03:47.121 CXX test/cpp_headers/file.o 00:03:47.121 CXX test/cpp_headers/ftl.o 00:03:47.121 CXX test/cpp_headers/gpt_spec.o 00:03:47.121 CXX test/cpp_headers/hexlify.o 00:03:47.121 CXX test/cpp_headers/histogram_data.o 00:03:47.121 CXX test/cpp_headers/idxd.o 00:03:47.121 CC test/nvme/cuse/cuse.o 00:03:47.122 LINK startup 00:03:47.122 CXX test/cpp_headers/idxd_spec.o 00:03:47.122 LINK err_injection 00:03:47.122 LINK bdevperf 00:03:47.122 LINK spdk_top 00:03:47.122 LINK connect_stress 00:03:47.122 LINK reserve 00:03:47.122 LINK sgl 00:03:47.122 CXX test/cpp_headers/init.o 00:03:47.122 CXX test/cpp_headers/ioat.o 00:03:47.122 CXX test/cpp_headers/ioat_spec.o 00:03:47.388 CXX test/cpp_headers/iscsi_spec.o 00:03:47.388 LINK boot_partition 00:03:47.388 LINK spdk_bdev 00:03:47.388 CXX test/cpp_headers/json.o 00:03:47.388 CXX test/cpp_headers/jsonrpc.o 00:03:47.388 LINK simple_copy 00:03:47.388 LINK nvme_dp 00:03:47.388 CXX test/cpp_headers/keyring.o 00:03:47.388 CXX test/cpp_headers/keyring_module.o 00:03:47.388 CXX test/cpp_headers/likely.o 00:03:47.388 LINK overhead 00:03:47.388 CXX test/cpp_headers/log.o 00:03:47.388 CXX test/cpp_headers/lvol.o 00:03:47.388 CXX test/cpp_headers/memory.o 00:03:47.388 CXX test/cpp_headers/mmio.o 00:03:47.388 LINK pci_ut 00:03:47.388 LINK nvme_fuzz 00:03:47.388 CXX test/cpp_headers/nbd.o 00:03:47.388 CXX test/cpp_headers/notify.o 00:03:47.388 CXX test/cpp_headers/nvme.o 00:03:47.388 CXX test/cpp_headers/nvme_intel.o 00:03:47.388 CXX test/cpp_headers/nvme_ocssd.o 00:03:47.388 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:47.388 LINK doorbell_aers 00:03:47.388 CXX test/cpp_headers/nvme_spec.o 00:03:47.388 CXX test/cpp_headers/nvme_zns.o 00:03:47.388 LINK fused_ordering 00:03:47.388 CXX test/cpp_headers/nvmf_cmd.o 00:03:47.388 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:47.388 CXX test/cpp_headers/nvmf.o 00:03:47.388 CXX test/cpp_headers/nvmf_spec.o 00:03:47.388 CXX test/cpp_headers/nvmf_transport.o 00:03:47.388 CXX test/cpp_headers/opal.o 00:03:47.388 CXX test/cpp_headers/opal_spec.o 00:03:47.388 CXX test/cpp_headers/pci_ids.o 00:03:47.651 CXX test/cpp_headers/queue.o 00:03:47.652 CXX test/cpp_headers/pipe.o 00:03:47.652 CXX test/cpp_headers/reduce.o 00:03:47.652 CXX test/cpp_headers/rpc.o 00:03:47.652 CXX test/cpp_headers/scheduler.o 00:03:47.652 CXX test/cpp_headers/scsi.o 00:03:47.652 CXX test/cpp_headers/scsi_spec.o 00:03:47.652 CXX test/cpp_headers/sock.o 00:03:47.652 CXX 
test/cpp_headers/string.o 00:03:47.652 CXX test/cpp_headers/stdinc.o 00:03:47.652 CXX test/cpp_headers/thread.o 00:03:47.652 CXX test/cpp_headers/trace.o 00:03:47.652 CXX test/cpp_headers/trace_parser.o 00:03:47.652 LINK vhost_fuzz 00:03:47.652 CXX test/cpp_headers/tree.o 00:03:47.652 CXX test/cpp_headers/ublk.o 00:03:47.652 CXX test/cpp_headers/util.o 00:03:47.652 LINK nvme_compliance 00:03:47.652 CXX test/cpp_headers/uuid.o 00:03:47.652 CXX test/cpp_headers/version.o 00:03:47.652 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.652 CXX test/cpp_headers/vfio_user_spec.o 00:03:47.652 CXX test/cpp_headers/vhost.o 00:03:47.652 CXX test/cpp_headers/vmd.o 00:03:47.652 CXX test/cpp_headers/xor.o 00:03:47.652 CXX test/cpp_headers/zipf.o 00:03:47.652 LINK fdp 00:03:48.585 LINK memory_ut 00:03:48.844 LINK iscsi_fuzz 00:03:48.844 LINK cuse 00:03:52.132 LINK esnap 00:03:52.132 00:03:52.132 real 0m40.443s 00:03:52.132 user 7m34.327s 00:03:52.132 sys 1m48.844s 00:03:52.132 19:34:01 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:52.132 19:34:01 make -- common/autotest_common.sh@10 -- $ set +x 00:03:52.132 ************************************ 00:03:52.132 END TEST make 00:03:52.132 ************************************ 00:03:52.132 19:34:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:52.132 19:34:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:52.132 19:34:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:52.132 19:34:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:52.132 19:34:01 -- pm/common@44 -- $ pid=3737294 00:03:52.132 19:34:01 -- pm/common@50 -- $ kill -TERM 3737294 00:03:52.132 19:34:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:52.132 19:34:01 -- pm/common@44 -- $ pid=3737296 00:03:52.132 19:34:01 -- pm/common@50 -- $ kill -TERM 3737296 00:03:52.132 19:34:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:52.132 19:34:01 -- pm/common@44 -- $ pid=3737298 00:03:52.132 19:34:01 -- pm/common@50 -- $ kill -TERM 3737298 00:03:52.132 19:34:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:52.132 19:34:01 -- pm/common@44 -- $ pid=3737329 00:03:52.132 19:34:01 -- pm/common@50 -- $ sudo -E kill -TERM 3737329 00:03:52.132 19:34:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:52.132 19:34:01 -- nvmf/common.sh@7 -- # uname -s 00:03:52.132 19:34:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.132 19:34:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.132 19:34:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.132 19:34:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.132 19:34:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.132 19:34:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.132 19:34:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.132 19:34:01 -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:03:52.132 19:34:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.132 19:34:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.132 19:34:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:52.132 19:34:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:52.132 19:34:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.132 19:34:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.132 19:34:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:52.132 19:34:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.132 19:34:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:52.132 19:34:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.132 19:34:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.132 19:34:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.132 19:34:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.132 19:34:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.132 19:34:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.132 19:34:01 -- paths/export.sh@5 -- # export PATH 00:03:52.132 19:34:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.132 19:34:01 -- nvmf/common.sh@47 -- # : 0 00:03:52.132 19:34:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:52.132 19:34:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:52.132 19:34:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.132 19:34:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.132 19:34:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.132 19:34:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:52.132 19:34:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:52.132 19:34:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:52.132 19:34:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:52.132 19:34:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:52.132 19:34:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:52.132 19:34:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:52.132 19:34:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:52.132 19:34:01 -- spdk/autotest.sh@39 
-- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:52.132 19:34:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:52.132 19:34:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.132 19:34:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.132 19:34:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:52.132 19:34:01 -- spdk/autotest.sh@48 -- # udevadm_pid=3813715 00:03:52.132 19:34:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:52.132 19:34:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:52.132 19:34:01 -- pm/common@17 -- # local monitor 00:03:52.132 19:34:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@21 -- # date +%s 00:03:52.132 19:34:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.132 19:34:01 -- pm/common@21 -- # date +%s 00:03:52.132 19:34:01 -- pm/common@25 -- # sleep 1 00:03:52.132 19:34:01 -- pm/common@21 -- # date +%s 00:03:52.132 19:34:01 -- pm/common@21 -- # date +%s 00:03:52.132 19:34:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721928841 00:03:52.132 19:34:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721928841 00:03:52.132 19:34:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721928841 00:03:52.133 19:34:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721928841 00:03:52.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721928841_collect-vmstat.pm.log 00:03:52.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721928841_collect-cpu-load.pm.log 00:03:52.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721928841_collect-cpu-temp.pm.log 00:03:52.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721928841_collect-bmc-pm.bmc.pm.log 00:03:53.067 19:34:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:53.067 19:34:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:53.067 19:34:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:53.067 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:03:53.067 19:34:02 -- spdk/autotest.sh@59 -- # create_test_list 00:03:53.067 19:34:02 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:53.067 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:03:53.067 19:34:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:53.067 19:34:02 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.067 19:34:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.067 19:34:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:53.067 19:34:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.067 19:34:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:53.067 19:34:02 -- common/autotest_common.sh@1451 -- # uname 00:03:53.067 19:34:02 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:53.067 19:34:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:53.067 19:34:02 -- common/autotest_common.sh@1471 -- # uname 00:03:53.067 19:34:02 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:53.067 19:34:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:53.067 19:34:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:53.067 19:34:02 -- spdk/autotest.sh@72 -- # hash lcov 00:03:53.067 19:34:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:53.067 19:34:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:53.067 --rc lcov_branch_coverage=1 00:03:53.067 --rc lcov_function_coverage=1 00:03:53.067 --rc genhtml_branch_coverage=1 00:03:53.067 --rc genhtml_function_coverage=1 00:03:53.067 --rc genhtml_legend=1 00:03:53.067 --rc geninfo_all_blocks=1 00:03:53.067 ' 00:03:53.067 19:34:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:53.067 --rc lcov_branch_coverage=1 00:03:53.067 --rc lcov_function_coverage=1 00:03:53.067 --rc genhtml_branch_coverage=1 00:03:53.067 --rc genhtml_function_coverage=1 00:03:53.067 --rc genhtml_legend=1 00:03:53.067 --rc geninfo_all_blocks=1 00:03:53.067 ' 00:03:53.067 19:34:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:53.067 --rc lcov_branch_coverage=1 00:03:53.067 --rc lcov_function_coverage=1 00:03:53.067 --rc genhtml_branch_coverage=1 00:03:53.067 --rc genhtml_function_coverage=1 00:03:53.067 --rc genhtml_legend=1 00:03:53.067 --rc geninfo_all_blocks=1 00:03:53.067 --no-external' 00:03:53.067 19:34:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:53.067 --rc lcov_branch_coverage=1 00:03:53.067 --rc lcov_function_coverage=1 00:03:53.067 --rc genhtml_branch_coverage=1 00:03:53.067 --rc genhtml_function_coverage=1 00:03:53.067 --rc genhtml_legend=1 00:03:53.067 --rc geninfo_all_blocks=1 00:03:53.067 --no-external' 00:03:53.067 19:34:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:53.326 lcov: LCOV version 1.14 00:03:53.326 19:34:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:08.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:08.201 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:23.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:23.089 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:23.089 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:23.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:23.090 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:23.090 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:23.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:25.627 19:34:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:25.627 19:34:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:25.627 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:04:25.627 19:34:34 -- spdk/autotest.sh@91 -- # rm -f 00:04:25.627 19:34:34 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.999 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:26.999 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:26.999 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:26.999 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:26.999 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:26.999 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:26.999 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:26.999 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:26.999 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:26.999 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:26.999 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:26.999 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:26.999 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:26.999 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:26.999 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:26.999 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:26.999 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:27.257 19:34:36 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:27.257 19:34:36 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:27.257 19:34:36 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:27.257 19:34:36 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:27.257 19:34:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:27.257 19:34:36 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:27.257 19:34:36 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:27.257 19:34:36 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.257 19:34:36 
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:27.257 19:34:36 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:27.257 19:34:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.257 19:34:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.257 19:34:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:27.257 19:34:36 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:27.257 19:34:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:27.257 No valid GPT data, bailing 00:04:27.257 19:34:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.257 19:34:36 -- scripts/common.sh@391 -- # pt= 00:04:27.257 19:34:36 -- scripts/common.sh@392 -- # return 1 00:04:27.257 19:34:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:27.257 1+0 records in 00:04:27.257 1+0 records out 00:04:27.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0023295 s, 450 MB/s 00:04:27.257 19:34:36 -- spdk/autotest.sh@118 -- # sync 00:04:27.257 19:34:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:27.257 19:34:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:27.257 19:34:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:29.158 19:34:38 -- spdk/autotest.sh@124 -- # uname -s 00:04:29.158 19:34:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:29.158 19:34:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:29.158 19:34:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.158 19:34:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.158 19:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:29.158 ************************************ 00:04:29.158 START TEST setup.sh 00:04:29.158 ************************************ 00:04:29.158 19:34:38 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:29.158 * Looking for test storage... 00:04:29.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.158 19:34:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:29.158 19:34:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:29.158 19:34:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:29.158 19:34:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.158 19:34:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.158 19:34:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.158 ************************************ 00:04:29.158 START TEST acl 00:04:29.158 ************************************ 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:29.158 * Looking for test storage... 
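The pre-cleanup trace above probes /dev/nvme0n1 for a partition table (spdk-gpt.py reports "No valid GPT data, bailing", and `blkid -s PTTYPE -o value` returns nothing), then zeroes the first MiB of the device with dd before the setup tests start. Below is a minimal stand-alone sketch of that probe-then-wipe step; the device path and the `wipe_if_unpartitioned` helper name are illustrative and not part of the SPDK scripts.

  #!/usr/bin/env bash
  # Minimal sketch (not the SPDK implementation): check whether a block device
  # carries a partition table and, if it does not, zero its first MiB so later
  # setup tests start from a clean device. The device path is an example.
  set -euo pipefail

  wipe_if_unpartitioned() {
      local dev=$1 pt
      # blkid prints the partition-table type (gpt, dos, ...) or nothing at all
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z "$pt" ]]; then
          echo "$dev: no partition table found, zeroing first MiB"
          dd if=/dev/zero of="$dev" bs=1M count=1
          sync
      else
          echo "$dev: found $pt partition table, leaving it alone"
      fi
  }

  wipe_if_unpartitioned /dev/nvme0n1

In the run above the probe came back empty (`pt=`), so the wipe branch ran and dd reported 1048576 bytes copied.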
00:04:29.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.158 19:34:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.158 19:34:38 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:29.158 19:34:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:29.158 19:34:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:29.158 19:34:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:29.158 19:34:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:29.158 19:34:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:29.158 19:34:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.158 19:34:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.532 19:34:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:30.532 19:34:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:30.532 19:34:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.532 19:34:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:30.532 19:34:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.532 19:34:39 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:31.904 Hugepages 00:04:31.904 node hugesize free / total 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 00:04:31.904 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.904 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:31.905 19:34:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:31.905 19:34:41 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.905 19:34:41 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.905 19:34:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.905 ************************************ 00:04:31.905 START TEST denied 00:04:31.905 ************************************ 00:04:31.905 19:34:41 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:31.905 19:34:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:31.905 19:34:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:31.905 19:34:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:31.905 19:34:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.905 19:34:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.280 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:33.280 19:34:42 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.280 19:34:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.806 00:04:35.806 real 0m3.748s 00:04:35.806 user 0m1.066s 00:04:35.807 sys 0m1.795s 00:04:35.807 19:34:44 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.807 19:34:44 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:35.807 ************************************ 00:04:35.807 END TEST denied 00:04:35.807 ************************************ 00:04:35.807 19:34:44 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:35.807 19:34:44 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.807 19:34:44 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.807 19:34:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:35.807 ************************************ 00:04:35.807 START TEST allowed 00:04:35.807 ************************************ 00:04:35.807 19:34:44 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:35.807 19:34:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:35.807 19:34:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:35.807 19:34:44 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:35.807 19:34:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.807 19:34:44 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.336 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.336 19:34:47 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:38.336 19:34:47 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:38.336 19:34:47 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:38.336 19:34:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.336 19:34:47 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.711 00:04:39.711 real 0m3.843s 00:04:39.711 user 0m0.971s 00:04:39.711 sys 0m1.701s 00:04:39.711 19:34:48 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.711 19:34:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:39.711 ************************************ 00:04:39.711 END TEST allowed 00:04:39.711 ************************************ 00:04:39.711 00:04:39.712 real 0m10.458s 00:04:39.712 user 0m3.142s 00:04:39.712 sys 0m5.316s 00:04:39.712 19:34:48 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.712 19:34:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.712 ************************************ 00:04:39.712 END TEST acl 00:04:39.712 ************************************ 00:04:39.712 19:34:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:39.712 19:34:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.712 19:34:48 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.712 19:34:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.712 ************************************ 00:04:39.712 START TEST hugepages 00:04:39.712 ************************************ 00:04:39.712 19:34:48 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:39.712 * Looking for test storage... 00:04:39.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41299272 kB' 'MemAvailable: 44783144 kB' 'Buffers: 2704 kB' 'Cached: 12724852 kB' 'SwapCached: 0 kB' 'Active: 9707136 kB' 'Inactive: 3491728 kB' 'Active(anon): 9319368 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473688 kB' 'Mapped: 205968 kB' 'Shmem: 8848060 kB' 'KReclaimable: 196432 kB' 'Slab: 559296 kB' 'SReclaimable: 196432 kB' 'SUnreclaim: 362864 kB' 'KernelStack: 12720 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 10407748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
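The `get_meminfo Hugepagesize` call above dumps /proc/meminfo and then, in the trace that continues below, walks it field by field until it can echo the 2048 kB default hugepage size. A short stand-alone way to read the same values is sketched here; the awk one-liners are an illustration, not how the traced setup/common.sh helper is written.

  #!/usr/bin/env bash
  # Minimal sketch (assumption: plain awk over /proc/meminfo rather than the
  # traced get_meminfo helper): report the default hugepage size and how many
  # hugepages are currently configured.
  set -euo pipefail

  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  total_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free_hugepages=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)

  echo "default hugepage size: ${hugepagesize_kb} kB"
  echo "hugepages configured:  ${total_hugepages} (free: ${free_hugepages})"
  # Reserved hugepage memory in kB, e.g. 2048 pages * 2048 kB = 4194304 kB
  echo "hugepage memory:       $(( total_hugepages * hugepagesize_kb )) kB"

With the meminfo snapshot shown above (Hugepagesize 2048 kB, HugePages_Total 2048) this prints 4194304 kB, which matches the Hugetlb figure in the dump.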
00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.713 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.714 19:34:48 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:39.714 19:34:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:39.714 19:34:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.714 19:34:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.714 19:34:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.714 ************************************ 00:04:39.714 START TEST default_setup 00:04:39.714 ************************************ 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.714 19:34:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.091 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:41.091 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:41.091 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:41.091 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:41.091 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:41.091 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:41.091 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:04:41.091 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:41.091 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.032 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.032 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.033 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.033 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.033 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.033 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.033 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43393024 kB' 'MemAvailable: 46876980 kB' 'Buffers: 2704 kB' 'Cached: 12724940 kB' 'SwapCached: 0 kB' 'Active: 9724044 kB' 'Inactive: 3491728 kB' 'Active(anon): 9336276 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491336 kB' 'Mapped: 205656 kB' 'Shmem: 8848148 kB' 'KReclaimable: 196600 kB' 'Slab: 559220 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362620 kB' 'KernelStack: 12560 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10428572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: setup/common.sh@31-@32 step through the snapshot above field by field, comparing each key from MemTotal through HardwareCorrupted against AnonHugePages and taking the continue branch on every non-match]
00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
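The lookup that just completed is the generic get_meminfo helper: it loads the relevant meminfo file, walks it as "key: value" pairs with an IFS=': ' read, and echoes the value of the first key that matches the requested name. Below is a self-contained sketch of that pattern, written for this log rather than copied from setup/common.sh; the function name sketch_get_meminfo and the command-line wrapper are invented here, while the /proc and sysfs paths are the ones visible in the trace.

  #!/usr/bin/env bash
  # Sketch of a meminfo lookup in the style traced above (illustrative, not the real setup/common.sh).
  shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n> " prefixes

  sketch_get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local var val _

      # With a node argument, prefer the per-node meminfo exposed by sysfs.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix each line with "Node <n> "; drop it so the keys line up.
      mem=("${mem[@]#Node +([0-9]) }")

      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"      # e.g. AnonHugePages -> 0, HugePages_Total -> 1024 on this runner
              return 0
          fi
      done
      return 1
  }

  sketch_get_meminfo "${1:-HugePages_Total}" "${2:-}"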
00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.034 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43396432 kB' 'MemAvailable: 46880388 kB' 'Buffers: 2704 kB' 'Cached: 12724944 kB' 'SwapCached: 0 kB' 'Active: 9724312 kB' 'Inactive: 3491728 kB' 'Active(anon): 9336544 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491684 kB' 'Mapped: 205656 kB' 'Shmem: 8848152 kB' 'KReclaimable: 196600 kB' 'Slab: 559220 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362620 kB' 'KernelStack: 12608 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10428592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
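The snapshot just printed already contains the numbers default_setup is aiming for: the test asked get_test_nr_hugepages for 2097152 kB backed by the system default of 2048 kB pages, and the kernel reports 1024 pages, all free, none reserved or surplus. A quick worked check of that arithmetic (the variable names exist only for this illustration):

  # Relation between the requested size and the counters in the snapshot above.
  size_kb=2097152        # size passed to get_test_nr_hugepages earlier in the trace
  hugepagesize_kb=2048   # "Hugepagesize: 2048 kB"
  pages=$(( size_kb / hugepagesize_kb ))
  echo "$pages"                          # 1024, matching HugePages_Total and HugePages_Free
  echo "$(( pages * hugepagesize_kb ))"  # 2097152, matching the "Hugetlb: 2097152 kB" line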
[xtrace condensed: setup/common.sh@31-@32 step through the snapshot above field by field, comparing each key from MemTotal through HugePages_Rsvd against HugePages_Surp and taking the continue branch on every non-match]
00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
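With anon=0 and surp=0 recorded, verify_nr_hugepages is now pulling HugePages_Rsvd; the point of collecting these counters is to confirm that the pool the kernel reports is consistent with the 1024 pages the test requested for node 0. The snippet below sketches that kind of consistency check under stated assumptions; it is not the actual verify_nr_hugepages logic from setup/hugepages.sh, and the real script's per-node policy may differ.

  # Sketch: sanity-check kernel hugepage counters against the test's expectation (assumed logic).
  expected_total=1024   # 2097152 kB / 2048 kB, as requested via get_test_nr_hugepages
  node=0                # node the test directed its pages to

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  node_total=$(cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages)

  # This run reported Total=Free=1024 and Rsvd=Surp=0, i.e. the pool exists and nothing is consumed yet.
  if (( total == expected_total && free == total && rsvd == 0 && surp == 0 && node_total <= total )); then
      echo "hugepage accounting matches the requested $expected_total pages (node$node holds $node_total)"
  else
      echo "unexpected hugepage state: total=$total free=$free rsvd=$rsvd surp=$surp node$node=$node_total" >&2
      exit 1
  fi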
00:04:42.036 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43397728 kB' 'MemAvailable: 46881684 kB' 'Buffers: 2704 kB' 'Cached: 12724960 kB' 'SwapCached: 0 kB' 'Active: 9724188 kB' 'Inactive: 3491728 kB' 'Active(anon): 9336420 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491524 kB' 'Mapped: 205580 kB' 'Shmem: 8848168 kB' 'KReclaimable: 196600 kB' 'Slab: 559232 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362632 kB' 'KernelStack: 12608 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10428612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: setup/common.sh@31-@32 step through the snapshot above field by field, comparing each key from MemTotal through CmaFree against HugePages_Rsvd and taking the continue branch on every non-match]
00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.038 nr_hugepages=1024 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.038 resv_hugepages=0 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.038 surplus_hugepages=0 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.038 anon_hugepages=0 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.038 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43399000 kB' 'MemAvailable: 46882956 kB' 'Buffers: 2704 kB' 'Cached: 12724984 kB' 'SwapCached: 0 kB' 'Active: 9723984 kB' 'Inactive: 3491728 kB' 'Active(anon): 9336216 kB' 'Inactive(anon): 0 kB' 
'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491264 kB' 'Mapped: 205580 kB' 'Shmem: 8848192 kB' 'KReclaimable: 196600 kB' 'Slab: 559232 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362632 kB' 'KernelStack: 12592 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10428636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.039 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
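The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time (IFS=': ' read -r var val _), skipping every key until it reaches the one requested (first HugePages_Rsvd, then HugePages_Total) and echoing its value. A minimal sketch of that lookup, reconstructed from what the trace prints rather than copied from the SPDK source, so the loop form and the miss-path return are assumptions:

shopt -s extglob

# Sketch of the lookup seen at setup/common.sh@17-33 in the trace above.
get_meminfo() {
    local get=$1          # requested key, e.g. HugePages_Total
    local node=${2:-}     # optional NUMA node number
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node lookups read the node-local meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
        echo "$val"                        # print the numeric value only
        return 0
    done
    return 1
}

Called as get_meminfo HugePages_Total it would print 1024 here; with a node argument (get_meminfo HugePages_Surp 0) it reads the node-local file instead, which is what the next part of the trace does.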
00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.040 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20398248 kB' 'MemUsed: 12478692 kB' 'SwapCached: 0 kB' 'Active: 5858900 kB' 'Inactive: 3354556 kB' 'Active(anon): 5590616 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9059688 kB' 'Mapped: 92904 kB' 'AnonPages: 156944 kB' 'Shmem: 5436848 kB' 'KernelStack: 6984 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90140 kB' 'Slab: 302440 kB' 'SReclaimable: 90140 kB' 'SUnreclaim: 212300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.041 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.042 node0=1024 expecting 1024 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.042 00:04:42.042 real 0m2.410s 00:04:42.042 user 0m0.688s 00:04:42.042 sys 0m0.856s 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.042 19:34:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:42.042 ************************************ 00:04:42.042 END TEST default_setup 00:04:42.042 ************************************ 00:04:42.042 19:34:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:42.042 19:34:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.042 19:34:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.042 19:34:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.042 ************************************ 00:04:42.042 START TEST per_node_1G_alloc 00:04:42.042 ************************************ 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:42.042 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.043 19:34:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.457 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.457 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.457 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.457 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.457 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.458 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.458 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.458 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.458 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.458 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.458 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.458 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.458 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.458 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.458 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.458 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.458 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:43.458 19:34:52 
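At this point default_setup has passed (node0=1024 expecting 1024) and per_node_1G_alloc has translated its request of 1048576 kB on each of nodes 0 and 1 into 512 default-size (2048 kB) hugepages per node, exported NRHUGE=512 and HUGENODE=0,1, and re-run scripts/setup.sh (every listed device was already bound to vfio-pci). The arithmetic, as a hedged sketch with illustrative variable names rather than the script's own:

# Only the numbers (1048576 kB, 2048 kB, 512 per node, 1024 total) come from the trace.
size_kb=1048576                                   # requested size per node (1 GiB)
hugepagesize_kb=2048                              # default Hugepagesize from /proc/meminfo
pages_per_node=$(( size_kb / hugepagesize_kb ))   # 512 pages on each node
total_pages=$(( pages_per_node * 2 ))             # 1024 across node0 and node1
echo "NRHUGE=$pages_per_node HUGENODE=0,1 -> expecting HugePages_Total=$total_pages"

The verify_nr_hugepages pass that starts next re-reads these counters with the same get_meminfo walk; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test just means AnonHugePages is only sampled because transparent hugepages are not set to [never], and the global and per-node HugePages totals are then compared against the expected 1024.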
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43409864 kB' 'MemAvailable: 46893820 kB' 'Buffers: 2704 kB' 'Cached: 12725056 kB' 'SwapCached: 0 kB' 'Active: 9729832 kB' 'Inactive: 3491728 kB' 'Active(anon): 9342064 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497020 kB' 'Mapped: 206124 kB' 'Shmem: 8848264 kB' 'KReclaimable: 196600 kB' 'Slab: 559192 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362592 kB' 'KernelStack: 12608 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10435064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195876 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.458 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.459 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
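The long run of "[[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above is the xtrace of get_meminfo stepping through /proc/meminfo one field at a time until it reaches the requested key (AnonHugePages here, HugePages_Surp and HugePages_Rsvd below). A condensed restatement of that lookup pattern follows; it is a simplification for readability, not the exact setup/common.sh source, and it omits the per-node /sys/devices/system/node/node<N>/meminfo handling the real helper has.

# Sketch of the lookup loop being traced above: split each /proc/meminfo field
# with IFS=': ' and return the value of the one requested key.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching fields are skipped, one trace entry each
        echo "$val"
        return 0
    done < /proc/meminfo
    echo 0                                 # fall back to 0 if the key is absent
}

get_meminfo_value AnonHugePages    # -> 0 on this runner
get_meminfo_value HugePages_Total  # -> 1024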
00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43409640 kB' 'MemAvailable: 46893596 kB' 'Buffers: 2704 kB' 'Cached: 12725056 kB' 'SwapCached: 0 kB' 'Active: 9726064 kB' 'Inactive: 3491728 kB' 'Active(anon): 9338296 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493232 kB' 'Mapped: 206568 kB' 'Shmem: 8848264 kB' 'KReclaimable: 196600 kB' 'Slab: 559180 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362580 kB' 'KernelStack: 12640 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10430584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.460 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.461 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43402080 kB' 'MemAvailable: 46886036 kB' 'Buffers: 2704 kB' 'Cached: 12725072 kB' 'SwapCached: 0 kB' 'Active: 9728640 kB' 'Inactive: 3491728 kB' 'Active(anon): 9340872 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495712 kB' 'Mapped: 206020 kB' 'Shmem: 8848280 kB' 'KReclaimable: 196600 kB' 'Slab: 559224 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362624 kB' 'KernelStack: 12640 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10434172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.462 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.463 19:34:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.463 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32, 00:04:43.463-464: the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats for AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free; none of them matches the requested key]
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:43.464 nr_hugepages=1024
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.464 resv_hugepages=0
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.464 surplus_hugepages=0
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.464 anon_hugepages=0
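What the scan above is doing: setup/common.sh's get_meminfo walks the captured /proc/meminfo fields one at a time (IFS=': '; read -r var val _) and only stops once the requested key, here HugePages_Rsvd, matches, at which point it echoes the value (0) and returns. A minimal stand-alone sketch of that pattern, assuming a plain file read instead of the mapfile capture the harness uses; the function name get_meminfo_value is illustrative, not the SPDK helper itself:

get_meminfo_value() {
    # Return the value of one meminfo key, mirroring the IFS=': ' scan in the trace.
    local key=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # every non-matching key is skipped
        echo "$val"
        return 0
    done < "$file"
    return 1                               # key not present
}
# Example with the values from this run: get_meminfo_value HugePages_Rsvd  ->  0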
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.464 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43402332 kB' 'MemAvailable: 46886288 kB' 'Buffers: 2704 kB' 'Cached: 12725100 kB' 'SwapCached: 0 kB' 'Active: 9730212 kB' 'Inactive: 3491728 kB' 'Active(anon): 9342444 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497272 kB' 'Mapped: 206500 kB' 'Shmem: 8848308 kB' 'KReclaimable: 196600 kB' 'Slab: 559224 kB' 'SReclaimable: 196600 kB' 'SUnreclaim: 362624 kB' 'KernelStack: 12624 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10435128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195892 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32, 00:04:43.464-730: the key scan runs against HugePages_Total for every field of the dump above, from MemTotal through Unaccepted; each non-matching key hits continue]
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
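At this point the pool as a whole checks out (nr_hugepages=1024 with no reserved or surplus pages) and get_nodes has found two NUMA nodes, recording the 512 pages expected on each in nodes_sys. A rough reconstruction of that discovery step, assuming the even split of the 1024-page pool that this two-node run shows; the real hugepages.sh uses an extglob node+([0-9]) pattern, and the plain glob and variable names here are only for illustration:

shopt -s nullglob
expected_total=1024                          # nr_hugepages reported above

declare -A nodes_sys=()
nodes=(/sys/devices/system/node/node[0-9]*)  # node0 and node1 on this machine
no_nodes=${#nodes[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

for node in "${nodes[@]}"; do
    # strip everything up to the trailing node id, e.g. .../node0 -> 0
    nodes_sys[${node##*node}]=$(( expected_total / no_nodes ))   # 512 per node here
done
echo "no_nodes=$no_nodes expected_per_node=${nodes_sys[0]}"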
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.730 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21456196 kB' 'MemUsed: 11420744 kB' 'SwapCached: 0 kB' 'Active: 5859072 kB' 'Inactive: 3354556 kB' 'Active(anon): 5590788 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9059696 kB' 'Mapped: 92904 kB' 'AnonPages: 157040 kB' 'Shmem: 5436856 kB' 'KernelStack: 6968 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90140 kB' 'Slab: 302436 kB' 'SReclaimable: 90140 kB' 'SUnreclaim: 212296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32, 00:04:43.730-732: the key scan runs against HugePages_Surp for every field of the node0 dump above, from MemTotal through HugePages_Free; each non-matching key hits continue]
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
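The node-0 lookup is the same key scan, just pointed at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh strips with the extglob expansion "${mem[@]#Node +([0-9]) }" before scanning. A simplified stand-alone version of that per-node read, assuming a plain field split instead of the extglob strip (the function name is illustrative):

get_node_meminfo_value() {
    # Per-node meminfo lines look like:  Node 0 HugePages_Surp:     0
    # so drop the first two fields ("Node" and the id) and the trailing ':'.
    local key=$1 node=$2
    local _node _id var val _
    while read -r _node _id var val _; do
        [[ ${var%:} == "$key" ]] || continue
        echo "$val"
        return 0
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}
# Examples with the values from this run:
#   get_node_meminfo_value HugePages_Surp 0    ->  0
#   get_node_meminfo_value HugePages_Total 0   ->  512
# On kernels that expose per-node hugepage counters in sysfs, the same numbers can
# also be read directly (2048 kB pages, matching the Hugepagesize reported above):
#   cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages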
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.732 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21945660 kB' 'MemUsed: 5719112 kB' 'SwapCached: 0 kB' 'Active: 3865436 kB' 'Inactive: 137172 kB' 'Active(anon): 3745952 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3668152 kB' 'Mapped: 112680 kB' 'AnonPages: 334540 kB' 'Shmem: 3411496 kB' 'KernelStack: 5656 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106460 kB' 'Slab: 256788 kB' 'SReclaimable: 106460 kB' 'SUnreclaim: 150328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32, 00:04:43.732-733: the key scan runs against HugePages_Surp for every field of the node1 dump above, from MemTotal through FilePmdMapped; each non-matching key hits continue]
00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.733 19:34:52
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:43.733 node0=512 expecting 512 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:43.733 node1=512 expecting 512 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:43.733 00:04:43.733 real 0m1.504s 00:04:43.733 user 0m0.567s 00:04:43.733 sys 0m0.893s 00:04:43.733 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.734 19:34:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.734 ************************************ 00:04:43.734 END TEST per_node_1G_alloc 00:04:43.734 ************************************ 00:04:43.734 19:34:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:43.734 19:34:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.734 19:34:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.734 19:34:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.734 ************************************ 00:04:43.734 START TEST even_2G_alloc 
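per_node_1G_alloc above ends with each node holding the expected 512 hugepages (node0=512, node1=512); the script's bookkeeping (nodes_test / nodes_sys) comes from the per-node meminfo interface traced above. As an independent cross-check, the same numbers are visible through the standard per-node sysfs hugepage counters. A minimal sketch, assuming the 2048 kB hugepage size reported in this run; this is not part of the SPDK scripts:

  # Print how many 2048 kB hugepages are currently allocated on each NUMA node.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${node_dir##*/}: $nr hugepages"     # expect 512 on node0 and node1 here
  done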
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.734 19:34:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
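The sizing trace above reduces to simple arithmetic: the requested 2097152 kB (2 GiB) divided by the 2048 kB hugepage size gives 1024 pages, split evenly over the two nodes as 512 each. A sketch of that calculation; the variable names here are illustrative, not the script's own:

  # Reproduce the hugepage sizing shown in the trace above (sketch only).
  size_kb=2097152                             # requested size: 2 GiB in kB
  hugepage_kb=2048                            # Hugepagesize reported in this run
  nodes=2                                     # NUMA nodes on this host
  nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024 -> exported as NRHUGE below
  per_node=$(( nr_hugepages / nodes ))        # 512  -> nodes_test[0] and nodes_test[1]
  echo "NRHUGE=$nr_hugepages, $per_node per node"

NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are then handed to scripts/setup.sh below, which performs the actual allocation.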
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.734 19:34:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:44.669 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.669 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:44.669 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.669 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.669 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.669 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.669 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.669 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.669 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:44.669 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.669 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.669 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.669 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.669 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.669 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.931 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.931 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
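scripts/setup.sh reports every managed device as already bound to vfio-pci. For reference, a device's current binding can be checked directly through sysfs; a minimal sketch using standard kernel paths, not something setup.sh itself runs:

  # Show which kernel driver a PCI function is bound to, e.g. the NVMe disk above.
  bdf=0000:88:00.0                            # example address taken from this log
  drv_link=/sys/bus/pci/devices/$bdf/driver
  if [ -e "$drv_link" ]; then
    echo "$bdf -> $(basename "$(readlink -f "$drv_link")")"   # prints vfio-pci here
  else
    echo "$bdf -> no driver bound"
  fi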
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.931 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43413612 kB' 'MemAvailable: 46897556 kB' 'Buffers: 2704 kB' 'Cached: 12725196 kB' 'SwapCached: 0 kB' 'Active: 9721152 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333384 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488248 kB' 'Mapped: 204764 kB' 'Shmem: 8848404 kB' 'KReclaimable: 196576 kB' 'Slab: 558956 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362380 kB' 'KernelStack: 12576 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10413728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
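The [[ ... ]] / continue records that follow are the trace of get_meminfo scanning that snapshot one field at a time. A condensed, self-contained sketch of the same lookup pattern; this is simplified (the real helper in setup/common.sh uses mapfile plus an extglob strip of the "Node <N>" prefix) and is not the script verbatim:

  # Sketch of the meminfo lookup pattern traced below (not the script itself).
  get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    # Per-node queries read the node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
      line=${line#"Node $node "}            # node files prefix each line with "Node <N> "
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then         # e.g. get=AnonHugePages or HugePages_Surp
        echo "$val"                         # kB value, or a bare page count for HugePages_*
        return 0
      fi
    done < "$mem_f"
    return 1
  }
  # e.g. get_meminfo_sketch HugePages_Free     -> 1024 on this box
  #      get_meminfo_sketch HugePages_Surp 1   -> 0 (node1)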
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # per-field scan of /proc/meminfo for AnonHugePages (IFS=': '; read -r var val _; continue on no match); fields compared: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.932 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.933 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.933 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.933 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.933 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.933 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43413028 kB' 'MemAvailable: 46896972 kB' 'Buffers: 2704 kB' 'Cached: 12725200 kB' 'SwapCached: 0 kB' 'Active: 9721128 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333360 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488192 kB' 'Mapped: 204748 kB' 'Shmem: 8848408 kB' 'KReclaimable: 196576 kB' 'Slab: 558936 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362360 kB' 'KernelStack: 12624 kB' 'PageTables: 7680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10413744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
00:04:44.933 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # per-field scan of /proc/meminfo for HugePages_Surp (IFS=': '; read -r var val _; continue on no match); fields compared: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free HugePages_Rsvd
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.934 19:34:54 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43414872 kB' 'MemAvailable: 46898816 kB' 'Buffers: 2704 kB' 'Cached: 12725216 kB' 'SwapCached: 0 kB' 'Active: 9720856 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333088 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487860 kB' 'Mapped: 204748 kB' 'Shmem: 8848424 kB' 'KReclaimable: 196576 kB' 'Slab: 558932 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362356 kB' 'KernelStack: 12592 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10413764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.935 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 
19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.936 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.937 nr_hugepages=1024 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.937 resv_hugepages=0 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.937 surplus_hugepages=0 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.937 anon_hugepages=0 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43414912 
kB' 'MemAvailable: 46898856 kB' 'Buffers: 2704 kB' 'Cached: 12725240 kB' 'SwapCached: 0 kB' 'Active: 9721144 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333376 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488140 kB' 'Mapped: 204748 kB' 'Shmem: 8848448 kB' 'KReclaimable: 196576 kB' 'Slab: 558924 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362348 kB' 'KernelStack: 12624 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10413788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.937 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.938 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21465660 kB' 'MemUsed: 11411280 kB' 'SwapCached: 0 kB' 'Active: 5858328 kB' 'Inactive: 3354556 kB' 'Active(anon): 5590044 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9059780 kB' 'Mapped: 92300 kB' 'AnonPages: 156316 kB' 'Shmem: 5436940 kB' 'KernelStack: 6984 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90140 kB' 'Slab: 302200 kB' 'SReclaimable: 90140 kB' 'SUnreclaim: 212060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.200 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.201 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21956880 kB' 'MemUsed: 5707892 kB' 'SwapCached: 0 kB' 'Active: 3862824 kB' 'Inactive: 137172 kB' 'Active(anon): 3743340 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3668184 kB' 'Mapped: 112448 kB' 'AnonPages: 331820 kB' 'Shmem: 3411528 kB' 'KernelStack: 5640 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106436 kB' 'Slab: 256704 kB' 'SReclaimable: 106436 kB' 'SUnreclaim: 150268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.202 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:45.203 node0=512 expecting 512
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:45.203 node1=512 expecting 512
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:45.203
00:04:45.203 real 0m1.414s
00:04:45.203 user 0m0.618s
00:04:45.203 sys 0m0.758s
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:45.203 19:34:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:45.203 ************************************
00:04:45.203 END TEST even_2G_alloc
00:04:45.203 ************************************
00:04:45.203 19:34:54 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:45.203 19:34:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:45.203 19:34:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:45.203 19:34:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:45.203 ************************************
00:04:45.203 START TEST odd_alloc
00:04:45.203 ************************************
00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:45.203 19:34:54
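
The even_2G_alloc result traced above comes out as node0=512 and node1=512, matching the expected even split of 1024 pages, and the odd_alloc test that starts here asks get_test_nr_hugepages for 2098176 kB, which the trace below turns into an odd count of 1025 hugepages (HUGEMEM=2049 with a 2048 kB Hugepagesize) assigned via nodes_test[_no_nodes - 1] as 513 and 512. A minimal bash sketch of that split arithmetic follows; the function name and structure are illustrative assumptions, not the actual setup/hugepages.sh code.

#!/usr/bin/env bash
# Sketch only (hypothetical helper): reproduce the per-node hugepage split that
# the hugepages tests trace -- 1024 pages become 512/512, 1025 become 513/512.
split_hugepages_per_node() {
    local total=$1 nodes=$2 n remaining=$1
    local -a per_node
    for (( n = nodes - 1; n >= 0; n-- )); do
        per_node[n]=$(( remaining / (n + 1) ))    # floor of an even share for this node
        remaining=$(( remaining - per_node[n] ))  # leftover rolls toward node 0
    done
    for n in "${!per_node[@]}"; do
        echo "node${n}=${per_node[n]}"
    done
}
split_hugepages_per_node 1024 2   # even_2G_alloc: node0=512 node1=512
split_hugepages_per_node 1025 2   # odd_alloc:     node0=513 node1=512

Filling from the highest node index downward and giving each node the floor of an even share leaves the odd remainder on node 0, which is consistent with the nodes_test values assigned in the trace that follows.
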
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:45.203 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:45.204 19:34:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:45.204 19:34:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.204 19:34:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.139 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.139 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:46.139 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.139 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.139 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.139 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.139 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.139 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.139 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.139 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.139 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.139 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.139 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.139 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:46.139 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.139 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.139 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43413972 kB' 'MemAvailable: 46897916 kB' 'Buffers: 2704 kB' 'Cached: 12725328 kB' 'SwapCached: 0 kB' 'Active: 9723024 kB' 'Inactive: 3491728 kB' 'Active(anon): 9335256 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489896 kB' 'Mapped: 204872 kB' 'Shmem: 8848536 kB' 'KReclaimable: 196576 kB' 'Slab: 559136 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362560 kB' 'KernelStack: 13232 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10416516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196304 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1730140 kB' 
'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.404 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43414192 kB' 'MemAvailable: 46898136 kB' 'Buffers: 2704 kB' 'Cached: 12725328 kB' 'SwapCached: 0 kB' 'Active: 9722684 kB' 'Inactive: 3491728 kB' 'Active(anon): 9334916 kB' 'Inactive(anon): 0 
kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489556 kB' 'Mapped: 204840 kB' 'Shmem: 8848536 kB' 'KReclaimable: 196576 kB' 'Slab: 559100 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362524 kB' 'KernelStack: 13104 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10414164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.405 
00:04:46.405 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (read -r var val _; [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue) repeated for every remaining /proc/meminfo field from Inactive through HugePages_Rsvd; none matches
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43415968 kB' 'MemAvailable: 46899912 kB' 'Buffers: 2704 kB' 'Cached: 12725344 kB' 'SwapCached: 0 kB' 'Active: 9721428 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333660 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488300 kB' 'Mapped: 204764 kB' 'Shmem: 8848552 kB' 'KReclaimable: 196576 kB' 'Slab: 559036 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362460 kB' 'KernelStack: 12640 kB' 'PageTables: 7380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10414184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
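For readers following the trace: the get_meminfo helper exercised above simply walks a meminfo file and prints the value of one requested field, switching to the per-NUMA-node file when a node index is supplied. A minimal stand-alone sketch of that behavior, assuming only standard sed and awk (the function name get_meminfo_value is hypothetical, not the actual setup/common.sh code):

    #!/usr/bin/env bash
    # Illustrative sketch only, not the SPDK setup/common.sh implementation.
    # Usage: get_meminfo_value <Field> [numa-node]
    get_meminfo_value() {
        local field=$1 node=${2:-} mem_f=/proc/meminfo
        # With a node index, read the per-node stats exported under /sys instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <n> "; strip that, then
        # print the value column of the matching field (the traced loop above
        # does the same thing with read/continue, one field at a time).
        sed -E 's/^Node [0-9]+ +//' "$mem_f" \
            | awk -v f="$field:" '$1 == f {print $2; exit}'
    }

    # Example: the two values queried so far in this trace.
    get_meminfo_value HugePages_Surp    # prints 0 on this host
    get_meminfo_value HugePages_Rsvd    # prints 0 on this host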
00:04:46.407 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (read -r var val _; [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]; continue) repeated for every field of the snapshot above; none matches until HugePages_Rsvd
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:46.409 nr_hugepages=1025
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:46.409 resv_hugepages=0
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:46.409 surplus_hugepages=0
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:46.409 anon_hugepages=0
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
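The arithmetic checked right after those echoes is the hugepage accounting identity for this test: the 1025 pages requested by odd_alloc must equal the configured count plus any surplus and reserved pages. A hedged re-statement of that check, reusing the hypothetical get_meminfo_value helper sketched earlier (variable names are illustrative, not the script's own):

    requested=1025                                   # odd page count under test
    surp=$(get_meminfo_value HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)         # 0 in this run
    nr=$(get_meminfo_value HugePages_Total)          # 1025 in this run
    if (( requested == nr + surp + resv )); then
        echo "hugepage accounting consistent: total=$nr surp=$surp resv=$resv"
    else
        echo "unexpected hugepage accounting" >&2
    fi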
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43415720 kB' 'MemAvailable: 46899664 kB' 'Buffers: 2704 kB' 'Cached: 12725364 kB' 'SwapCached: 0 kB' 'Active: 9721236 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333468 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488120 kB' 'Mapped: 204764 kB' 'Shmem: 8848572 kB' 'KReclaimable: 196576 kB' 'Slab: 559164 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362588 kB' 'KernelStack: 12608 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10414204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
00:04:46.409 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (read -r var val _; [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]; continue) repeated for every field of the snapshot above; none matches until HugePages_Total
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:46.411 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21465796 kB' 'MemUsed: 11411144 kB' 'SwapCached: 0 kB' 'Active: 5857260 kB' 'Inactive: 3354556 kB' 'Active(anon): 5588976 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9059908 kB' 'Mapped: 92240 kB' 'AnonPages: 155092 kB' 'Shmem: 5437068 kB' 'KernelStack: 6952 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90140 kB' 'Slab: 302284 kB' 'SReclaimable: 90140 kB' 'SUnreclaim: 212144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:46.672 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (read -r var val _; [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue) over the node0 fields above; the trace continues past this point
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
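The trace around this point walks /sys/devices/system/node/node0/meminfo one field at a time (IFS=': ' plus read -r var val _) until it reaches the requested key. Below is a minimal stand-alone sketch of that helper, reconstructed only from the get_meminfo steps visible in the xtrace; the authoritative version is setup/common.sh in the SPDK test scripts, so treat this as an approximation rather than the shipped implementation.

#!/usr/bin/env bash
# Approximation of the get_meminfo helper traced above: return one field from
# /proc/meminfo, or from a per-node meminfo file when a node index is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo

    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }          # per-node rows carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # e.g. HugePages_Surp -> 0, HugePages_Total -> 512
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example: per-node surplus huge pages, as queried in the trace.
get_meminfo HugePages_Surp 0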
00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.673 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21948952 kB' 'MemUsed: 5715820 kB' 'SwapCached: 0 kB' 'Active: 3864012 kB' 'Inactive: 137172 kB' 'Active(anon): 3744528 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3668188 kB' 'Mapped: 112464 kB' 'AnonPages: 333052 kB' 'Shmem: 3411532 kB' 'KernelStack: 5656 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106436 kB' 'Slab: 256872 kB' 'SReclaimable: 106436 kB' 'SUnreclaim: 150436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
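The same helper is then called once per NUMA node (node=0 above, node=1 just below) so that hugepages.sh can fold any per-node surplus into its expected counts before printing the node0=512 expecting 513 / node1=513 expecting 512 summary further down. A rough sketch of that shape follows, reusing the get_meminfo approximation above; the variable names come from the trace and the exact bookkeeping lives in setup/hugepages.sh, so this is illustrative only.

# Sketch of the checks traced here (hugepages.sh@110-@117), assuming the
# get_meminfo sketch above has been sourced. nr_hugepages, surp and resv are
# whatever the test configured beforehand.
verify_hugepages() {
    local nr_hugepages=$1 surp=${2:-0} resv=${3:-0} node
    # Global check: HugePages_Total must equal requested + surplus + reserved
    # (the "(( 1025 == nr_hugepages + surp + resv ))" line in the trace).
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
    # Per-node pass: read each node's surplus; the real script adds it to the
    # expected count and then prints "nodeN=<expected> expecting <actual>".
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node$node HugePages_Surp: $(get_meminfo HugePages_Surp "$node")"
    done
}

# Illustrative call: a total of 1025 pages, as echoed in the odd_alloc run above.
verify_hugepages 1025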
00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.674 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:46.675 node0=512 expecting 513 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:46.675 node1=513 expecting 512 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:46.675 00:04:46.675 real 0m1.415s 00:04:46.675 user 0m0.664s 00:04:46.675 sys 0m0.711s 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.675 19:34:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.675 ************************************ 00:04:46.675 END TEST odd_alloc 00:04:46.675 ************************************ 00:04:46.675 19:34:55 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:46.675 19:34:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.675 19:34:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.675 19:34:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.675 ************************************ 00:04:46.675 START TEST custom_alloc 00:04:46.675 ************************************ 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.675 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.676 19:34:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.611 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.611 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:47.611 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.611 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.611 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.611 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.611 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.920 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:47.920 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.920 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.920 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.920 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.920 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.920 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.920 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.920 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:47.920 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42347596 kB' 'MemAvailable: 45831540 kB' 'Buffers: 2704 kB' 'Cached: 12725464 kB' 'SwapCached: 0 kB' 'Active: 9722344 kB' 'Inactive: 3491728 kB' 'Active(anon): 9334576 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489156 kB' 'Mapped: 204880 kB' 'Shmem: 8848672 kB' 'KReclaimable: 196576 kB' 'Slab: 559124 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362548 kB' 'KernelStack: 12704 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10414408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.920 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
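The custom_alloc test that starts in this part of the log takes a different route: instead of one global count it pre-computes a per-node plan (nodes_hp[0]=512, nodes_hp[1]=1024), joins it into the HUGENODE string shown at hugepages.sh@187 and sizes nr_hugepages to the 1536-page sum before calling setup output. A compact sketch of that assembly is below; the 2048 kB Hugepagesize comes from the meminfo dump above, and the kB interpretation of the 1048576/2097152 requests is inferred from the trace rather than stated by it.

# Sketch of how custom_alloc builds its per-node plan (hugepages.sh@174-@188).
default_hugepages=2048                 # kB, Hugepagesize from the meminfo dump above

declare -a nodes_hp HUGENODE
nodes_hp[0]=$(( 1048576 / default_hugepages ))   # 1048576 kB (1 GiB) on node 0 -> 512 pages
nodes_hp[1]=$(( 2097152 / default_hugepages ))   # 2097152 kB (2 GiB) on node 1 -> 1024 pages

nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done

# Joined with commas, matching the string visible at hugepages.sh@187.
echo "HUGENODE=$(IFS=,; echo "${HUGENODE[*]}")"   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$nr_hugepages"                 # 1536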
00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.921 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42348000 kB' 'MemAvailable: 45831944 kB' 'Buffers: 2704 kB' 'Cached: 12725468 kB' 'SwapCached: 0 kB' 'Active: 9721920 kB' 'Inactive: 3491728 kB' 'Active(anon): 9334152 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488752 kB' 'Mapped: 204856 kB' 'Shmem: 8848676 kB' 'KReclaimable: 196576 kB' 'Slab: 559100 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362524 kB' 'KernelStack: 12688 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10414428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.922 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.923 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.924 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42351520 kB' 'MemAvailable: 45835464 kB' 'Buffers: 2704 kB' 'Cached: 12725484 kB' 'SwapCached: 0 kB' 'Active: 9721624 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333856 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488388 kB' 'Mapped: 204776 kB' 'Shmem: 8848692 kB' 'KReclaimable: 196576 kB' 'Slab: 559084 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362508 kB' 'KernelStack: 12672 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10414448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.925 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.926 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:48.189 nr_hugepages=1536 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.189 resv_hugepages=0 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.189 surplus_hugepages=0 00:04:48.189 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.189 anon_hugepages=0 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42351912 kB' 'MemAvailable: 45835856 kB' 'Buffers: 2704 kB' 'Cached: 12725508 kB' 'SwapCached: 0 kB' 'Active: 9721628 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333860 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488356 kB' 'Mapped: 204776 kB' 'Shmem: 8848716 kB' 'KReclaimable: 196576 kB' 'Slab: 559052 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362476 kB' 'KernelStack: 12656 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10414468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
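[Annotation] At this point the three lookups have produced anon=0, surp=0 and resv=0, and the script echoes nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before asking for HugePages_Total. The accounting being traced is plain arithmetic: with no surplus and no reserved pages, nr_hugepages + surp + resv = 1536 + 0 + 0 = 1536, which must match what /proc/meminfo reports. A compressed restatement of that assertion, with values and variable names taken from the trace (the real setup/hugepages.sh wraps this in more bookkeeping):

    anon=0 surp=0 resv=0 nr_hugepages=1536
    (( 1536 == nr_hugepages + surp + resv ))                      # hugepages.sh@107 in the trace
    (( 1536 == nr_hugepages ))                                    # hugepages.sh@109
    (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages ))   # what the HugePages_Total scan below resolves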
00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
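[Annotation] The /proc/meminfo snapshots printed above are internally consistent with the requested allocation: HugePages_Total: 1536 at Hugepagesize: 2048 kB gives 1536 x 2048 kB = 3145728 kB (3 GiB), exactly the Hugetlb: 3145728 kB field in each snapshot, while HugePages_Free: 1536 shows none of the pool is in use yet. A one-line check of that arithmetic, purely illustrative:

    echo $(( 1536 * 2048 ))   # 3145728 kB, matching the Hugetlb line in the snapshots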
00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.190 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.190 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.191 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21471284 kB' 'MemUsed: 11405656 kB' 'SwapCached: 0 kB' 'Active: 5857584 kB' 'Inactive: 3354556 kB' 'Active(anon): 5589300 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060032 kB' 'Mapped: 92736 kB' 'AnonPages: 155244 kB' 'Shmem: 5437192 kB' 'KernelStack: 6984 kB' 'PageTables: 3204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90140 kB' 'Slab: 302264 kB' 'SReclaimable: 90140 kB' 'SUnreclaim: 212124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.192 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.193 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 20876812 kB' 'MemUsed: 6787960 kB' 'SwapCached: 0 kB' 'Active: 3867492 kB' 'Inactive: 137172 kB' 'Active(anon): 3748008 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3668200 kB' 'Mapped: 112476 kB' 'AnonPages: 336528 kB' 'Shmem: 3411544 kB' 'KernelStack: 5656 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106436 kB' 'Slab: 256788 kB' 'SReclaimable: 106436 kB' 'SUnreclaim: 150352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.193 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
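Both per-node lookups in this stretch (HugePages_Surp for node 0 above and node 1 here) switch the input file from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and strip the leading 'Node N ' prefix before running the same field loop. A hedged sketch of that source-selection step; node_meminfo_lines is an illustrative name, and the extglob pattern mirrors the one visible in the trace.

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing, as in the traced script

# Sketch: pick the meminfo source for an optional NUMA node and normalise its
# lines so they look like the system-wide file, as the trace above does.
node_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node 0 ' / 'Node 1 ' prefix
    printf '%s\n' "${mem[@]}"
}

node_meminfo_lines 1 | grep HugePages_Surp   # both nodes report a surplus of 0 in this run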
00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.194 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:48.195 node0=512 expecting 512 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:48.195 node1=1024 expecting 1024 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:48.195 00:04:48.195 real 0m1.469s 00:04:48.195 user 0m0.620s 00:04:48.195 sys 0m0.803s 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.195 19:34:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.195 ************************************ 00:04:48.195 END TEST custom_alloc 00:04:48.195 ************************************ 00:04:48.195 19:34:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:48.195 19:34:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.195 19:34:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.195 19:34:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.195 ************************************ 00:04:48.195 START TEST no_shrink_alloc 00:04:48.195 ************************************ 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
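The custom_alloc case closes here with both expectations met (node0=512, node1=1024), and no_shrink_alloc starts by asking get_test_nr_hugepages for 2097152 restricted to node 0, which the trace resolves to nr_hugepages=1024 on that node. Assuming the requested size is expressed in kB, as the arithmetic suggests (2097152 / 2048 kB per page = 1024), here is a small sketch of that conversion and per-node assignment; hugepagesize_kb, request_hugepages and nodes_requested are illustrative names, not the script's own variables.

#!/usr/bin/env bash
# Sketch: turn a requested reservation (kB, assumed) into a hugepage count and
# pin it to the given NUMA nodes, matching the numbers in the trace above.
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB pages on this test node

request_hugepages() {
    local size_kb=$1; shift
    local -a nodes=("$@")
    local pages node
    pages=$(( size_kb / hugepagesize_kb ))
    declare -gA nodes_requested=()                 # node id -> pages to reserve there
    for node in "${nodes[@]}"; do
        nodes_requested[$node]=$pages
    done
    echo "reserving $pages pages on node(s): ${nodes[*]}"
}

request_hugepages 2097152 0   # -> reserving 1024 pages on node(s): 0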
00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.195 19:34:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.130 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:49.130 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:49.130 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:49.130 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:49.130 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:49.130 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:49.130 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:49.130 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:49.130 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.130 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:49.130 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:49.130 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:49.130 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:49.130 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:49.130 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:49.130 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:49.130 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.392 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.393 19:34:58 
00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.393 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43346752 kB' 'MemAvailable: 46830696 kB' 'Buffers: 2704 kB' 'Cached: 12725588 kB' 'SwapCached: 0 kB' 'Active: 9721628 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333860 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488332 kB' 'Mapped: 204832 kB' 'Shmem: 8848796 kB' 'KReclaimable: 196576 kB' 'Slab: 559252 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362676 kB' 'KernelStack: 12656 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10414532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: setup/common.sh@31-@32 iterate over the snapshot above -- IFS=': ', read -r var val _, then [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; every key from MemTotal through HardwareCorrupted falls through to continue]
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
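Two details of the helper are visible in the trace: with no node argument, node is empty, the /sys/devices/system/node/node/meminfo test at common.sh@23 fails, and the read falls back to /proc/meminfo; with a node argument, every line of the per-node meminfo carries a "Node <n> " prefix, which the extglob expansion at common.sh@29 strips so the same key comparison works for both files. A stand-alone demo of that strip (the sample array contents here are made up):

    shopt -s extglob
    mem=('Node 0 AnonHugePages:      0 kB' 'Node 0 HugePages_Total:  512')
    # ${array[@]#pattern} strips the prefix from each element; +([0-9]) needs extglob.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"   # -> 'AnonHugePages:      0 kB' / 'HugePages_Total:  512'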
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.394 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43345496 kB' 'MemAvailable: 46829440 kB' 'Buffers: 2704 kB' 'Cached: 12725588 kB' 'SwapCached: 0 kB' 'Active: 9722212 kB' 'Inactive: 3491728 kB' 'Active(anon): 9334444 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488872 kB' 'Mapped: 204820 kB' 'Shmem: 8848796 kB' 'KReclaimable: 196576 kB' 'Slab: 559252 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362676 kB' 'KernelStack: 12688 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10414548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the same setup/common.sh@31-@32 loop scans this snapshot for HugePages_Surp; every key from MemTotal through HugePages_Rsvd falls through to continue]
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
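For context on the two counters being read back here: HugePages_Surp counts pages allocated beyond vm.nr_hugepages (bounded by vm.nr_overcommit_hugepages) and HugePages_Rsvd counts pages reserved for mappings but not yet faulted in; both need to be zero for the pool arithmetic later in this test to come out clean. The same counters can be inspected by hand outside the suite (illustrative commands, not part of the test scripts):

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
    cat /proc/sys/vm/nr_hugepages /proc/sys/vm/nr_overcommit_hugepages
    # Per-NUMA-node view of the 2 MB pool:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages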
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.396 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43345668 kB' 'MemAvailable: 46829612 kB' 'Buffers: 2704 kB' 'Cached: 12725600 kB' 'SwapCached: 0 kB' 'Active: 9721872 kB' 'Inactive: 3491728 kB' 'Active(anon): 9334104 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488520 kB' 'Mapped: 204820 kB' 'Shmem: 8848808 kB' 'KReclaimable: 196576 kB' 'Slab: 559300 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362724 kB' 'KernelStack: 12672 kB' 'PageTables: 7680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10414572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the setup/common.sh@31-@32 loop scans this snapshot for HugePages_Rsvd; every key from MemTotal through HugePages_Free falls through to continue]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:49.398 nr_hugepages=1024 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.398 resv_hugepages=0 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.398 surplus_hugepages=0 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.398 anon_hugepages=0 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.398 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43347268 kB' 'MemAvailable: 46831212 kB' 'Buffers: 2704 kB' 'Cached: 12725628 kB' 'SwapCached: 0 kB' 'Active: 9721680 kB' 'Inactive: 3491728 kB' 'Active(anon): 9333912 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488292 kB' 'Mapped: 204820 kB' 'Shmem: 8848836 kB' 'KReclaimable: 196576 kB' 'Slab: 559300 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362724 kB' 'KernelStack: 12656 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10414596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.399 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.400 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.660 
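The HugePages_Total lookup traced above walks /proc/meminfo (or a per-node copy under /sys/devices/system/node/nodeN/meminfo) with IFS=': ', reading key/value pairs until the requested field matches, then echoes that value back to hugepages.sh. A minimal standalone sketch of that lookup, assuming the same procfs/sysfs layout as this runner; the helper name and argument handling are illustrative, not the exact setup/common.sh code:

  #!/usr/bin/env bash
  shopt -s extglob                      # needed for the +([0-9]) prefix pattern below
  get_meminfo_sketch() {
      local key=$1 node=$2 file=/proc/meminfo line var val _
      [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          line=${line#Node +([0-9]) }   # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }
  # get_meminfo_sketch HugePages_Total      -> 1024 on this runner
  # get_meminfo_sketch HugePages_Surp 0     -> 0 for NUMA node 0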
19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.660 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20425332 kB' 'MemUsed: 12451608 kB' 'SwapCached: 0 kB' 'Active: 5857664 kB' 'Inactive: 3354556 kB' 'Active(anon): 5589380 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060144 kB' 'Mapped: 92300 kB' 'AnonPages: 155244 kB' 'Shmem: 5437304 kB' 'KernelStack: 6968 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90140 kB' 'Slab: 302300 kB' 'SReclaimable: 90140 kB' 'SUnreclaim: 212160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.661 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.662 19:34:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.662 node0=1024 expecting 1024 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.662 19:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.595 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.595 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:50.857 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.857 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.857 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.857 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.857 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.857 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.857 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.857 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.857 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.857 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.857 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.857 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.857 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.857 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.857 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.857 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:50.857 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:50.857 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.857 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.857 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.858 19:35:00 
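By this point the test has confirmed that all 1024 reserved 2 MiB hugepages sit on node 0 (node0=1024 expecting 1024), and the subsequent scripts/setup.sh run with NRHUGE=512 is effectively a no-op because more pages are already allocated, as the INFO line notes. A hedged sketch of the same per-node check expressed directly against the standard sysfs hugepage counters; the expected values simply mirror this run (1024 on node0, 0 on node1) and are not taken from the test scripts:

  #!/usr/bin/env bash
  # Verify per-NUMA-node 2 MiB hugepage counts straight from sysfs.
  size_kb=2048
  declare -A expected=([0]=1024 [1]=0)
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      f=$node/hugepages/hugepages-${size_kb}kB/nr_hugepages
      [[ -r $f ]] || continue
      have=$(<"$f")
      echo "node$n=$have expecting ${expected[$n]:-0}"
      (( have == ${expected[$n]:-0} )) || exit 1
  done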
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43344196 kB' 'MemAvailable: 46828140 kB' 'Buffers: 2704 kB' 'Cached: 12725696 kB' 'SwapCached: 0 kB' 'Active: 9723320 kB' 'Inactive: 3491728 kB' 'Active(anon): 9335552 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489916 kB' 'Mapped: 204948 kB' 'Shmem: 8848904 kB' 'KReclaimable: 196576 kB' 'Slab: 559084 kB' 'SReclaimable: 196576 kB' 'SUnreclaim: 362508 kB' 'KernelStack: 12832 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10416892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.858 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43343604 kB' 'MemAvailable: 46827532 kB' 'Buffers: 2704 kB' 'Cached: 12725704 kB' 'SwapCached: 0 kB' 'Active: 9724412 kB' 'Inactive: 3491728 kB' 'Active(anon): 9336644 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490964 kB' 'Mapped: 204920 kB' 'Shmem: 8848912 kB' 'KReclaimable: 196544 kB' 'Slab: 559052 kB' 'SReclaimable: 196544 kB' 'SUnreclaim: 362508 kB' 'KernelStack: 12976 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10417496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.859 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 
19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.860 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 
19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43345036 kB' 'MemAvailable: 46828964 kB' 'Buffers: 2704 kB' 'Cached: 12725724 kB' 'SwapCached: 0 kB' 'Active: 9723648 kB' 
'Inactive: 3491728 kB' 'Active(anon): 9335880 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490260 kB' 'Mapped: 204932 kB' 'Shmem: 8848932 kB' 'KReclaimable: 196544 kB' 'Slab: 559164 kB' 'SReclaimable: 196544 kB' 'SUnreclaim: 362620 kB' 'KernelStack: 12928 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10417520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.861 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.862 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.863 nr_hugepages=1024 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.863 resv_hugepages=0 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.863 surplus_hugepages=0 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.863 anon_hugepages=0 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.863 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43343996 kB' 'MemAvailable: 46827924 kB' 'Buffers: 2704 kB' 
'Cached: 12725724 kB' 'SwapCached: 0 kB' 'Active: 9723964 kB' 'Inactive: 3491728 kB' 'Active(anon): 9336196 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490576 kB' 'Mapped: 204932 kB' 'Shmem: 8848932 kB' 'KReclaimable: 196544 kB' 'Slab: 559156 kB' 'SReclaimable: 196544 kB' 'SUnreclaim: 362612 kB' 'KernelStack: 12928 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10415172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1730140 kB' 'DirectMap2M: 13918208 kB' 'DirectMap1G: 53477376 kB' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.864 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.124 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
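The long run of "[[ <key> == HugePages_Rsvd ]] / continue" and "[[ <key> == HugePages_Total ]] / continue" entries here is bash xtrace from the get_meminfo helper in setup/common.sh: it prints the chosen meminfo file once (the printf '%s\n' 'MemTotal: ...' block), then walks it key by key with IFS=': ' and "read -r var val _" until the requested key matches, echoing that value (0 for HugePages_Rsvd above, 1024 for HugePages_Total just below). A minimal sketch of the same field-scan pattern, using a hypothetical helper name and a sed prefix strip in place of the script's own "Node +([0-9])" handling; this mirrors the traced loop rather than reproducing the SPDK helper verbatim:

    # Sketch only: look up one key in /proc/meminfo or a per-node meminfo file.
    get_meminfo_value() {
        local get=$1 node=${2:-}       # key to look up, optional NUMA node number
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines carry a "Node <n> " prefix
        return 1
    }
    # get_meminfo_value HugePages_Total   -> 1024 on this host
    # get_meminfo_value HugePages_Free 0  -> value taken from node0's meminfo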
00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.125 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20413992 kB' 'MemUsed: 12462948 kB' 'SwapCached: 0 kB' 'Active: 5858452 kB' 'Inactive: 3354556 kB' 'Active(anon): 5590168 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060200 
kB' 'Mapped: 92416 kB' 'AnonPages: 156056 kB' 'Shmem: 5437360 kB' 'KernelStack: 7048 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90108 kB' 'Slab: 302188 kB' 'SReclaimable: 90108 kB' 'SUnreclaim: 212080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
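At this point the scan has switched from the system-wide /proc/meminfo to the per-node file /sys/devices/system/node/node0/meminfo and is looking for HugePages_Surp, after get_nodes recorded 1024 pages on node 0 and 0 on node 1 a few entries back. A rough equivalent of that per-node accounting, assuming only the 2048 kB page size matters (Hugepagesize: 2048 kB on this host); this is an illustration, not the SPDK get_nodes function itself:

    declare -A node_pages
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        hp=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
        [[ -e $hp ]] && node_pages[$n]=$(<"$hp")
    done
    # Expected on this run: node_pages[0]=1024 and node_pages[1]=0, which is what
    # the "node0=1024 expecting 1024" check printed further down verifies.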
00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.126 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.127 node0=1024 expecting 1024 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.127 00:04:51.127 real 0m2.893s 00:04:51.127 user 0m1.184s 00:04:51.127 sys 0m1.631s 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.127 19:35:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.127 ************************************ 00:04:51.127 END TEST no_shrink_alloc 00:04:51.127 ************************************ 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:51.127 19:35:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:51.127 00:04:51.127 real 0m11.474s 00:04:51.127 user 0m4.487s 00:04:51.127 sys 0m5.896s 00:04:51.127 19:35:00 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.127 19:35:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.127 ************************************ 00:04:51.127 END TEST hugepages 00:04:51.127 ************************************ 00:04:51.127 19:35:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:51.127 19:35:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.127 19:35:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.127 19:35:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.127 ************************************ 00:04:51.127 START TEST driver 00:04:51.127 ************************************ 00:04:51.127 19:35:00 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:51.127 * Looking for test storage... 
00:04:51.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:51.127 19:35:00 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:51.127 19:35:00 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.127 19:35:00 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.658 19:35:02 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:53.658 19:35:02 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.658 19:35:02 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.658 19:35:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.658 ************************************ 00:04:53.658 START TEST guess_driver 00:04:53.658 ************************************ 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:53.658 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:53.658 Looking for driver=vfio-pci 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.658 19:35:02 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.032 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.033 19:35:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.968 19:35:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.503 00:04:58.503 real 0m4.720s 00:04:58.503 user 0m1.047s 00:04:58.503 sys 0m1.793s 00:04:58.503 19:35:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.503 19:35:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:58.503 ************************************ 00:04:58.503 END TEST guess_driver 00:04:58.503 ************************************ 00:04:58.503 00:04:58.503 real 0m7.274s 00:04:58.503 user 0m1.601s 00:04:58.503 sys 0m2.821s 00:04:58.503 19:35:07 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.503 
19:35:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:58.503 ************************************ 00:04:58.503 END TEST driver 00:04:58.503 ************************************ 00:04:58.503 19:35:07 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:58.503 19:35:07 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.503 19:35:07 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.503 19:35:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.503 ************************************ 00:04:58.503 START TEST devices 00:04:58.503 ************************************ 00:04:58.503 19:35:07 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:58.503 * Looking for test storage... 00:04:58.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:58.503 19:35:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:58.503 19:35:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:58.503 19:35:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.503 19:35:07 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:59.907 19:35:09 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:59.907 19:35:09 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:59.907 No valid GPT data, 
bailing 00:04:59.907 19:35:09 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:59.907 19:35:09 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:59.907 19:35:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:59.907 19:35:09 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:59.907 19:35:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.168 19:35:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:00.168 19:35:09 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:00.168 19:35:09 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:00.168 19:35:09 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:00.168 19:35:09 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.168 19:35:09 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.168 19:35:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:00.168 ************************************ 00:05:00.168 START TEST nvme_mount 00:05:00.168 ************************************ 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:00.168 19:35:09 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.168 19:35:09 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:01.105 Creating new GPT entries in memory. 00:05:01.105 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.105 other utilities. 00:05:01.105 19:35:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.105 19:35:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.105 19:35:10 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.105 19:35:10 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.105 19:35:10 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.041 Creating new GPT entries in memory. 00:05:02.041 The operation has completed successfully. 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3833802 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
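For reference, the nvme_mount steps traced above reduce to a short partition/format/mount sequence; the sketch below is a minimal equivalent, assuming /dev/nvme0n1 is a disposable scratch disk and using an illustrative mount point in place of the workspace path.

#!/usr/bin/env bash
# Minimal sketch of the partition/format/mount sequence traced above.
# Assumes /dev/nvme0n1 is a disposable scratch disk; mnt is an illustrative path.
set -euo pipefail

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_demo                  # the test itself mounts under test/setup/nvme_mount

sgdisk "$disk" --zap-all                  # wipe existing GPT/MBR metadata
sgdisk "$disk" --new=1:2048:2099199       # one ~1 GiB partition (sectors 2048..2099199)
udevadm settle                            # wait for the partition node (the test uses sync_dev_uevents.sh)
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                 # quiet, forced ext4 format
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                    # marker file the verify step looks for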
00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.041 19:35:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:03.418 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.418 19:35:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.677 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:03.677 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:03.677 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:03.677 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:03.677 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:03.677 19:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:03.677 19:35:13 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.677 19:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:03.677 19:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:03.677 19:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.936 19:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.868 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.127 19:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.501 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:06.502 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.502 00:05:06.502 real 0m6.432s 00:05:06.502 user 0m1.541s 00:05:06.502 sys 0m2.495s 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.502 19:35:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:06.502 ************************************ 00:05:06.502 END TEST nvme_mount 00:05:06.502 ************************************ 
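The cleanup_nvme teardown traced above unmounts the test mount point and scrubs the signatures that wipefs reports; a minimal sketch, again assuming /dev/nvme0n1 is the scratch disk and the mount point is illustrative:

#!/usr/bin/env bash
# Sketch of the cleanup_nvme teardown: unmount if mounted, then wipe filesystem and
# partition-table signatures. Assumes the same scratch disk and illustrative mount point.
set -euo pipefail

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_demo

if mountpoint -q "$mnt"; then
    umount "$mnt"
fi
if [[ -b ${disk}p1 ]]; then
    wipefs --all "${disk}p1"              # erases the ext4 magic (53 ef), as seen above
fi
if [[ -b $disk ]]; then
    wipefs --all "$disk"                  # erases the GPT headers and protective MBR
fi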
00:05:06.502 19:35:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:06.502 19:35:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.502 19:35:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.502 19:35:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:06.502 ************************************ 00:05:06.502 START TEST dm_mount 00:05:06.502 ************************************ 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.502 19:35:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:07.437 Creating new GPT entries in memory. 00:05:07.437 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:07.437 other utilities. 00:05:07.437 19:35:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:07.437 19:35:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.437 19:35:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.437 19:35:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.437 19:35:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:08.813 Creating new GPT entries in memory. 00:05:08.813 The operation has completed successfully. 
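The dm_mount test lays out two ~1 GiB partitions and then builds a device-mapper device (nvme_dm_test) on top of them. The trace shows only "dmsetup create nvme_dm_test", so the linear concatenation table in the sketch below is an assumption about how such a device can be built, not necessarily the exact table the script passes.

#!/usr/bin/env bash
# Sketch of the dm_mount layout: two ~1 GiB partitions concatenated into one
# device-mapper device. The linear table below is an assumption for illustration;
# the trace only shows "dmsetup create nvme_dm_test", not the table itself.
set -euo pipefail

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199       # nvme0n1p1
sgdisk "$disk" --new=2:2099200:4196351    # nvme0n1p2
udevadm settle                            # the test waits via sync_dev_uevents.sh instead

p1=${disk}p1
p2=${disk}p2
s1=$(blockdev --getsz "$p1")              # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test      # resolves to /dev/dm-N, as in the trace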
00:05:08.813 19:35:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:08.813 19:35:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.813 19:35:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:08.813 19:35:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.813 19:35:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:09.747 The operation has completed successfully. 00:05:09.747 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:09.747 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3836190 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.748 19:35:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.683 19:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.941 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:10.942 19:35:20 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.942 19:35:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.876 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:12.135 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:12.135 00:05:12.135 real 0m5.621s 00:05:12.135 user 0m0.925s 00:05:12.135 sys 0m1.548s 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.135 19:35:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:12.135 ************************************ 00:05:12.135 END TEST dm_mount 00:05:12.135 ************************************ 00:05:12.135 19:35:21 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:12.135 19:35:21 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:12.135 19:35:21 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.135 19:35:21 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.135 19:35:21 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.135 19:35:21 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.135 19:35:21 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.393 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:12.393 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:12.393 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.393 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.393 19:35:21 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:12.393 00:05:12.393 real 0m14.024s 00:05:12.393 user 0m3.155s 00:05:12.393 sys 0m5.092s 00:05:12.393 19:35:21 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.393 19:35:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:12.393 ************************************ 00:05:12.393 END TEST devices 00:05:12.393 ************************************ 00:05:12.393 00:05:12.393 real 0m43.462s 00:05:12.393 user 0m12.475s 00:05:12.393 sys 0m19.284s 00:05:12.393 19:35:21 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.393 19:35:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.393 ************************************ 00:05:12.393 END TEST setup.sh 00:05:12.393 ************************************ 00:05:12.393 19:35:21 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:13.766 Hugepages 00:05:13.766 node hugesize free / total 00:05:13.766 node0 1048576kB 0 / 0 00:05:13.766 node0 2048kB 2048 / 2048 00:05:13.766 node1 1048576kB 0 / 0 00:05:13.766 node1 2048kB 0 / 0 00:05:13.766 00:05:13.766 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.766 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:13.766 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:13.766 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:13.766 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:13.766 19:35:23 -- spdk/autotest.sh@130 -- # uname -s 00:05:13.766 19:35:23 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:13.766 19:35:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:13.766 19:35:23 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.148 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:15.148 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:15.148 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:15.715 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:15.974 19:35:25 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:16.911 19:35:26 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:16.911 19:35:26 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:16.911 19:35:26 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.911 19:35:26 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:16.911 19:35:26 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:16.911 19:35:26 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:16.911 19:35:26 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.911 19:35:26 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.911 19:35:26 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:17.169 19:35:26 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:17.169 19:35:26 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:17.169 19:35:26 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.107 Waiting for block devices as requested 00:05:18.107 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:18.107 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:18.365 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:18.365 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:18.365 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:18.625 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:18.625 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:18.625 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:18.625 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:18.884 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:18.884 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:18.884 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:18.884 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:19.142 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:19.142 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:19.142 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:19.142 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:19.401 19:35:28 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
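The get_nvme_bdfs helper traced above derives the list of NVMe PCI addresses from SPDK's gen_nvme.sh output; a minimal sketch, assuming rootdir points at an SPDK checkout as it does in this harness:

#!/usr/bin/env bash
# Sketch of the get_nvme_bdfs pattern: gen_nvme.sh emits SPDK bdev config JSON and
# jq pulls each controller's PCI address (traddr). Assumes rootdir is an SPDK checkout.
set -euo pipefail

rootdir=${rootdir:-/path/to/spdk}         # assumption: adjust to your SPDK tree
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

if (( ${#bdfs[@]} == 0 )); then
    echo "no NVMe controllers found" >&2
    exit 1
fi
printf '%s\n' "${bdfs[@]}"                # on this node: 0000:88:00.0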
00:05:19.401 19:35:28 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:19.401 19:35:28 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:19.401 19:35:28 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:19.401 19:35:28 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:19.401 19:35:28 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:19.401 19:35:28 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:19.401 19:35:28 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:19.401 19:35:28 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:19.401 19:35:28 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:19.401 19:35:28 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:19.401 19:35:28 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:19.401 19:35:28 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:19.401 19:35:28 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:19.401 19:35:28 -- common/autotest_common.sh@1553 -- # continue 00:05:19.401 19:35:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:19.401 19:35:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.401 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:19.401 19:35:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:19.401 19:35:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.401 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:19.401 19:35:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.777 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.777 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.777 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.752 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.752 19:35:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:21.752 19:35:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.752 19:35:31 -- 
common/autotest_common.sh@10 -- # set +x 00:05:21.752 19:35:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:21.752 19:35:31 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:21.752 19:35:31 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.752 19:35:31 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:21.752 19:35:31 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:21.752 19:35:31 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:21.752 19:35:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:21.752 19:35:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:21.752 19:35:31 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.752 19:35:31 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.752 19:35:31 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:21.752 19:35:31 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:21.752 19:35:31 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:21.752 19:35:31 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:21.752 19:35:31 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:21.752 19:35:31 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:21.752 19:35:31 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:21.752 19:35:31 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:21.752 19:35:31 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:21.752 19:35:31 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:21.752 19:35:31 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3841363 00:05:21.752 19:35:31 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.752 19:35:31 -- common/autotest_common.sh@1594 -- # waitforlisten 3841363 00:05:21.752 19:35:31 -- common/autotest_common.sh@827 -- # '[' -z 3841363 ']' 00:05:21.752 19:35:31 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.752 19:35:31 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.752 19:35:31 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.752 19:35:31 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.752 19:35:31 -- common/autotest_common.sh@10 -- # set +x 00:05:22.011 [2024-07-25 19:35:31.193929] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
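The pre-cleanup pass traced above locates the OPAL-capable controller by listing every NVMe BDF that gen_nvme.sh reports and keeping the ones whose PCI device ID is 0x0a54. A minimal by-hand sketch of that same check, assuming the workspace layout shown in this log (illustrative only, not part of the harness):

  # Enumerate NVMe BDFs the same way the trace above does, then keep only
  # controllers whose PCI device ID matches 0x0a54.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
  done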
00:05:22.011 [2024-07-25 19:35:31.194011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841363 ] 00:05:22.011 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.011 [2024-07-25 19:35:31.251405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.011 [2024-07-25 19:35:31.340737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.269 19:35:31 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.269 19:35:31 -- common/autotest_common.sh@860 -- # return 0 00:05:22.269 19:35:31 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:22.269 19:35:31 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:22.269 19:35:31 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:25.546 nvme0n1 00:05:25.546 19:35:34 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:25.546 [2024-07-25 19:35:34.902289] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:25.546 [2024-07-25 19:35:34.902327] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:25.546 request: 00:05:25.546 { 00:05:25.546 "nvme_ctrlr_name": "nvme0", 00:05:25.546 "password": "test", 00:05:25.546 "method": "bdev_nvme_opal_revert", 00:05:25.546 "req_id": 1 00:05:25.546 } 00:05:25.546 Got JSON-RPC error response 00:05:25.546 response: 00:05:25.546 { 00:05:25.546 "code": -32603, 00:05:25.546 "message": "Internal error" 00:05:25.546 } 00:05:25.546 19:35:34 -- common/autotest_common.sh@1600 -- # true 00:05:25.546 19:35:34 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:25.546 19:35:34 -- common/autotest_common.sh@1604 -- # killprocess 3841363 00:05:25.546 19:35:34 -- common/autotest_common.sh@946 -- # '[' -z 3841363 ']' 00:05:25.546 19:35:34 -- common/autotest_common.sh@950 -- # kill -0 3841363 00:05:25.546 19:35:34 -- common/autotest_common.sh@951 -- # uname 00:05:25.546 19:35:34 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.546 19:35:34 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3841363 00:05:25.546 19:35:34 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.546 19:35:34 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.546 19:35:34 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3841363' 00:05:25.546 killing process with pid 3841363 00:05:25.546 19:35:34 -- common/autotest_common.sh@965 -- # kill 3841363 00:05:25.546 19:35:34 -- common/autotest_common.sh@970 -- # wait 3841363 00:05:27.446 19:35:36 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:27.446 19:35:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:27.446 19:35:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:27.446 19:35:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:27.446 19:35:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:27.446 19:35:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.446 19:35:36 -- common/autotest_common.sh@10 -- # set +x 00:05:27.446 19:35:36 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:27.446 19:35:36 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:27.446 19:35:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.446 19:35:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.446 19:35:36 -- common/autotest_common.sh@10 -- # set +x 00:05:27.446 ************************************ 00:05:27.446 START TEST env 00:05:27.446 ************************************ 00:05:27.446 19:35:36 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:27.446 * Looking for test storage... 00:05:27.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:27.446 19:35:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:27.446 19:35:36 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.446 19:35:36 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.446 19:35:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.446 ************************************ 00:05:27.446 START TEST env_memory 00:05:27.446 ************************************ 00:05:27.446 19:35:36 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:27.446 00:05:27.446 00:05:27.446 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.446 http://cunit.sourceforge.net/ 00:05:27.446 00:05:27.446 00:05:27.446 Suite: memory 00:05:27.446 Test: alloc and free memory map ...[2024-07-25 19:35:36.823660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.446 passed 00:05:27.446 Test: mem map translation ...[2024-07-25 19:35:36.843774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.446 [2024-07-25 19:35:36.843795] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.446 [2024-07-25 19:35:36.843851] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.446 [2024-07-25 19:35:36.843864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.705 passed 00:05:27.705 Test: mem map registration ...[2024-07-25 19:35:36.884395] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:27.705 [2024-07-25 19:35:36.884414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:27.705 passed 00:05:27.705 Test: mem map adjacent registrations ...passed 00:05:27.705 00:05:27.705 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.705 suites 1 1 n/a 0 0 00:05:27.705 tests 4 4 4 0 0 00:05:27.705 asserts 152 152 152 0 n/a 00:05:27.705 00:05:27.705 Elapsed time = 0.141 seconds 00:05:27.705 00:05:27.705 real 0m0.150s 00:05:27.705 user 0m0.140s 00:05:27.705 sys 0m0.009s 00:05:27.705 19:35:36 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.705 19:35:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.705 ************************************ 00:05:27.705 END TEST env_memory 00:05:27.705 ************************************ 00:05:27.705 19:35:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.705 19:35:36 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.705 19:35:36 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.705 19:35:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.705 ************************************ 00:05:27.705 START TEST env_vtophys 00:05:27.705 ************************************ 00:05:27.705 19:35:36 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.705 EAL: lib.eal log level changed from notice to debug 00:05:27.705 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.705 EAL: Detected lcore 1 as core 1 on socket 0 00:05:27.705 EAL: Detected lcore 2 as core 2 on socket 0 00:05:27.705 EAL: Detected lcore 3 as core 3 on socket 0 00:05:27.705 EAL: Detected lcore 4 as core 4 on socket 0 00:05:27.705 EAL: Detected lcore 5 as core 5 on socket 0 00:05:27.705 EAL: Detected lcore 6 as core 8 on socket 0 00:05:27.705 EAL: Detected lcore 7 as core 9 on socket 0 00:05:27.705 EAL: Detected lcore 8 as core 10 on socket 0 00:05:27.705 EAL: Detected lcore 9 as core 11 on socket 0 00:05:27.705 EAL: Detected lcore 10 as core 12 on socket 0 00:05:27.705 EAL: Detected lcore 11 as core 13 on socket 0 00:05:27.705 EAL: Detected lcore 12 as core 0 on socket 1 00:05:27.705 EAL: Detected lcore 13 as core 1 on socket 1 00:05:27.705 EAL: Detected lcore 14 as core 2 on socket 1 00:05:27.705 EAL: Detected lcore 15 as core 3 on socket 1 00:05:27.705 EAL: Detected lcore 16 as core 4 on socket 1 00:05:27.705 EAL: Detected lcore 17 as core 5 on socket 1 00:05:27.705 EAL: Detected lcore 18 as core 8 on socket 1 00:05:27.705 EAL: Detected lcore 19 as core 9 on socket 1 00:05:27.705 EAL: Detected lcore 20 as core 10 on socket 1 00:05:27.705 EAL: Detected lcore 21 as core 11 on socket 1 00:05:27.705 EAL: Detected lcore 22 as core 12 on socket 1 00:05:27.705 EAL: Detected lcore 23 as core 13 on socket 1 00:05:27.705 EAL: Detected lcore 24 as core 0 on socket 0 00:05:27.705 EAL: Detected lcore 25 as core 1 on socket 0 00:05:27.705 EAL: Detected lcore 26 as core 2 on socket 0 00:05:27.705 EAL: Detected lcore 27 as core 3 on socket 0 00:05:27.705 EAL: Detected lcore 28 as core 4 on socket 0 00:05:27.705 EAL: Detected lcore 29 as core 5 on socket 0 00:05:27.705 EAL: Detected lcore 30 as core 8 on socket 0 00:05:27.705 EAL: Detected lcore 31 as core 9 on socket 0 00:05:27.705 EAL: Detected lcore 32 as core 10 on socket 0 00:05:27.705 EAL: Detected lcore 33 as core 11 on socket 0 00:05:27.705 EAL: Detected lcore 34 as core 12 on socket 0 00:05:27.705 EAL: Detected lcore 35 as core 13 on socket 0 00:05:27.705 EAL: Detected lcore 36 as core 0 on socket 1 00:05:27.705 EAL: Detected lcore 37 as core 1 on socket 1 00:05:27.705 EAL: Detected lcore 38 as core 2 on socket 1 00:05:27.705 EAL: Detected lcore 39 as core 3 on socket 1 00:05:27.705 EAL: Detected lcore 40 as core 4 on socket 1 00:05:27.705 EAL: Detected lcore 41 as core 5 on socket 1 00:05:27.705 EAL: Detected lcore 42 as core 8 on socket 1 00:05:27.705 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:27.705 EAL: Detected lcore 44 as core 10 on socket 1 00:05:27.705 EAL: Detected lcore 45 as core 11 on socket 1 00:05:27.705 EAL: Detected lcore 46 as core 12 on socket 1 00:05:27.705 EAL: Detected lcore 47 as core 13 on socket 1 00:05:27.705 EAL: Maximum logical cores by configuration: 128 00:05:27.705 EAL: Detected CPU lcores: 48 00:05:27.706 EAL: Detected NUMA nodes: 2 00:05:27.706 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:27.706 EAL: Detected shared linkage of DPDK 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:27.706 EAL: Registered [vdev] bus. 00:05:27.706 EAL: bus.vdev log level changed from disabled to notice 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:27.706 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.706 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:27.706 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:27.706 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Bus pci wants IOVA as 'DC' 00:05:27.706 EAL: Bus vdev wants IOVA as 'DC' 00:05:27.706 EAL: Buses did not request a specific IOVA mode. 00:05:27.706 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:27.706 EAL: Selected IOVA mode 'VA' 00:05:27.706 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.706 EAL: Probing VFIO support... 00:05:27.706 EAL: IOMMU type 1 (Type 1) is supported 00:05:27.706 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:27.706 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:27.706 EAL: VFIO support initialized 00:05:27.706 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.706 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.706 EAL: Setting up physically contiguous memory... 
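EAL settles on IOVA mode 'VA' here because VFIO with IOMMU type 1 is available on this host; without a usable IOMMU it would fall back to physical addressing instead. A few generic host-side checks that roughly mirror what the probe above reports (a hedged sketch of standard Linux commands, not part of the SPDK test itself):

  ls /sys/kernel/iommu_groups | wc -l           # non-zero when an IOMMU is active, matching "IOMMU is available"
  lsmod | grep -E 'vfio_pci|vfio_iommu_type1'   # modules behind "IOMMU type 1 (Type 1) is supported"
  grep -i huge /proc/meminfo                    # the 2048 kB hugepages the memseg lists above are carved from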
00:05:27.706 EAL: Setting maximum number of open files to 524288 00:05:27.706 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.706 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:27.706 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.706 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:27.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.706 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:27.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.706 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:27.706 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:27.706 EAL: Hugepages will be freed exactly as allocated. 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: TSC frequency is ~2700000 KHz 00:05:27.706 EAL: Main lcore 0 is ready (tid=7f9e1e170a00;cpuset=[0]) 00:05:27.706 EAL: Trying to obtain current memory policy. 00:05:27.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.706 EAL: Restoring previous memory policy: 0 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.706 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.706 00:05:27.706 00:05:27.706 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.706 http://cunit.sourceforge.net/ 00:05:27.706 00:05:27.706 00:05:27.706 Suite: components_suite 00:05:27.706 Test: vtophys_malloc_test ...passed 00:05:27.706 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.706 EAL: Restoring previous memory policy: 4 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.706 EAL: Trying to obtain current memory policy. 00:05:27.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.706 EAL: Restoring previous memory policy: 4 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.706 EAL: Trying to obtain current memory policy. 00:05:27.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.706 EAL: Restoring previous memory policy: 4 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.706 EAL: Trying to obtain current memory policy. 
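The expand/shrink pairs above and below come from vtophys_spdk_malloc_test allocating progressively larger buffers (2 MB, 4 MB, 6 MB, 10 MB, and so on up to 1026 MB) and releasing them again, each step reported through the 'spdk:(nil)' mem event callback registered earlier. The same unit test can be run outside the harness once hugepages are reserved; a hedged sketch using only the paths that appear in this log:

  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh        # rebinds devices and reserves hugepages, as earlier in this log
  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys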
00:05:27.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.706 EAL: Restoring previous memory policy: 4 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.706 EAL: request: mp_malloc_sync 00:05:27.706 EAL: No shared files mode enabled, IPC is disabled 00:05:27.706 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.707 EAL: Trying to obtain current memory policy. 00:05:27.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.707 EAL: Restoring previous memory policy: 4 00:05:27.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.707 EAL: request: mp_malloc_sync 00:05:27.707 EAL: No shared files mode enabled, IPC is disabled 00:05:27.707 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.707 EAL: request: mp_malloc_sync 00:05:27.707 EAL: No shared files mode enabled, IPC is disabled 00:05:27.707 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.707 EAL: Trying to obtain current memory policy. 00:05:27.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.707 EAL: Restoring previous memory policy: 4 00:05:27.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.707 EAL: request: mp_malloc_sync 00:05:27.707 EAL: No shared files mode enabled, IPC is disabled 00:05:27.707 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.707 EAL: request: mp_malloc_sync 00:05:27.707 EAL: No shared files mode enabled, IPC is disabled 00:05:27.707 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.707 EAL: Trying to obtain current memory policy. 00:05:27.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.965 EAL: Restoring previous memory policy: 4 00:05:27.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.965 EAL: request: mp_malloc_sync 00:05:27.965 EAL: No shared files mode enabled, IPC is disabled 00:05:27.965 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.965 EAL: request: mp_malloc_sync 00:05:27.965 EAL: No shared files mode enabled, IPC is disabled 00:05:27.965 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.965 EAL: Trying to obtain current memory policy. 00:05:27.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.965 EAL: Restoring previous memory policy: 4 00:05:27.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.965 EAL: request: mp_malloc_sync 00:05:27.965 EAL: No shared files mode enabled, IPC is disabled 00:05:27.965 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.223 EAL: request: mp_malloc_sync 00:05:28.223 EAL: No shared files mode enabled, IPC is disabled 00:05:28.223 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.223 EAL: Trying to obtain current memory policy. 
00:05:28.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.223 EAL: Restoring previous memory policy: 4 00:05:28.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.223 EAL: request: mp_malloc_sync 00:05:28.223 EAL: No shared files mode enabled, IPC is disabled 00:05:28.223 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.481 EAL: request: mp_malloc_sync 00:05:28.481 EAL: No shared files mode enabled, IPC is disabled 00:05:28.481 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.481 EAL: Trying to obtain current memory policy. 00:05:28.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.739 EAL: Restoring previous memory policy: 4 00:05:28.739 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.739 EAL: request: mp_malloc_sync 00:05:28.739 EAL: No shared files mode enabled, IPC is disabled 00:05:28.739 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.996 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.255 EAL: request: mp_malloc_sync 00:05:29.255 EAL: No shared files mode enabled, IPC is disabled 00:05:29.255 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.255 passed 00:05:29.255 00:05:29.255 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.255 suites 1 1 n/a 0 0 00:05:29.255 tests 2 2 2 0 0 00:05:29.255 asserts 497 497 497 0 n/a 00:05:29.255 00:05:29.255 Elapsed time = 1.391 seconds 00:05:29.255 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.255 EAL: request: mp_malloc_sync 00:05:29.255 EAL: No shared files mode enabled, IPC is disabled 00:05:29.255 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.255 EAL: No shared files mode enabled, IPC is disabled 00:05:29.255 EAL: No shared files mode enabled, IPC is disabled 00:05:29.255 EAL: No shared files mode enabled, IPC is disabled 00:05:29.255 00:05:29.255 real 0m1.517s 00:05:29.255 user 0m0.881s 00:05:29.255 sys 0m0.595s 00:05:29.255 19:35:38 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.255 19:35:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.255 ************************************ 00:05:29.255 END TEST env_vtophys 00:05:29.255 ************************************ 00:05:29.255 19:35:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:29.255 19:35:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.255 19:35:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.255 19:35:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.255 ************************************ 00:05:29.255 START TEST env_pci 00:05:29.255 ************************************ 00:05:29.255 19:35:38 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:29.255 00:05:29.255 00:05:29.255 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.255 http://cunit.sourceforge.net/ 00:05:29.255 00:05:29.255 00:05:29.255 Suite: pci 00:05:29.255 Test: pci_hook ...[2024-07-25 19:35:38.559248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3842257 has claimed it 00:05:29.255 EAL: Cannot find device (10000:00:01.0) 00:05:29.255 EAL: Failed to attach device on primary process 00:05:29.255 passed 00:05:29.255 00:05:29.255 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:29.255 suites 1 1 n/a 0 0 00:05:29.255 tests 1 1 1 0 0 00:05:29.255 asserts 25 25 25 0 n/a 00:05:29.255 00:05:29.255 Elapsed time = 0.020 seconds 00:05:29.255 00:05:29.255 real 0m0.033s 00:05:29.255 user 0m0.008s 00:05:29.255 sys 0m0.025s 00:05:29.255 19:35:38 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.255 19:35:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.255 ************************************ 00:05:29.255 END TEST env_pci 00:05:29.255 ************************************ 00:05:29.255 19:35:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.255 19:35:38 env -- env/env.sh@15 -- # uname 00:05:29.255 19:35:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.255 19:35:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.255 19:35:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.255 19:35:38 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:29.255 19:35:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.255 19:35:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.255 ************************************ 00:05:29.255 START TEST env_dpdk_post_init 00:05:29.255 ************************************ 00:05:29.255 19:35:38 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.255 EAL: Detected CPU lcores: 48 00:05:29.255 EAL: Detected NUMA nodes: 2 00:05:29.255 EAL: Detected shared linkage of DPDK 00:05:29.255 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.255 EAL: Selected IOVA mode 'VA' 00:05:29.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.255 EAL: VFIO support initialized 00:05:29.255 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.513 EAL: Using IOMMU type 1 (Type 1) 00:05:29.513 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:29.514 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:30.447 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:33.727 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:33.727 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:33.727 Starting DPDK initialization... 00:05:33.727 Starting SPDK post initialization... 00:05:33.727 SPDK NVMe probe 00:05:33.727 Attaching to 0000:88:00.0 00:05:33.727 Attached to 0000:88:00.0 00:05:33.727 Cleaning up... 00:05:33.727 00:05:33.727 real 0m4.398s 00:05:33.727 user 0m3.245s 00:05:33.727 sys 0m0.211s 00:05:33.727 19:35:43 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.727 19:35:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.727 ************************************ 00:05:33.727 END TEST env_dpdk_post_init 00:05:33.727 ************************************ 00:05:33.727 19:35:43 env -- env/env.sh@26 -- # uname 00:05:33.727 19:35:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.727 19:35:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.727 19:35:43 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.727 19:35:43 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.727 19:35:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.727 ************************************ 00:05:33.727 START TEST env_mem_callbacks 00:05:33.727 ************************************ 00:05:33.727 19:35:43 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.727 EAL: Detected CPU lcores: 48 00:05:33.727 EAL: Detected NUMA nodes: 2 00:05:33.727 EAL: Detected shared linkage of DPDK 00:05:33.727 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.727 EAL: Selected IOVA mode 'VA' 00:05:33.727 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.727 EAL: VFIO support initialized 00:05:33.727 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.727 00:05:33.727 00:05:33.727 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.727 http://cunit.sourceforge.net/ 00:05:33.727 00:05:33.727 00:05:33.727 Suite: memory 00:05:33.727 Test: test ... 
00:05:33.727 register 0x200000200000 2097152 00:05:33.727 malloc 3145728 00:05:33.727 register 0x200000400000 4194304 00:05:33.727 buf 0x200000500000 len 3145728 PASSED 00:05:33.727 malloc 64 00:05:33.727 buf 0x2000004fff40 len 64 PASSED 00:05:33.727 malloc 4194304 00:05:33.727 register 0x200000800000 6291456 00:05:33.727 buf 0x200000a00000 len 4194304 PASSED 00:05:33.727 free 0x200000500000 3145728 00:05:33.727 free 0x2000004fff40 64 00:05:33.727 unregister 0x200000400000 4194304 PASSED 00:05:33.727 free 0x200000a00000 4194304 00:05:33.727 unregister 0x200000800000 6291456 PASSED 00:05:33.727 malloc 8388608 00:05:33.727 register 0x200000400000 10485760 00:05:33.727 buf 0x200000600000 len 8388608 PASSED 00:05:33.727 free 0x200000600000 8388608 00:05:33.727 unregister 0x200000400000 10485760 PASSED 00:05:33.727 passed 00:05:33.727 00:05:33.727 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.727 suites 1 1 n/a 0 0 00:05:33.727 tests 1 1 1 0 0 00:05:33.727 asserts 15 15 15 0 n/a 00:05:33.727 00:05:33.727 Elapsed time = 0.005 seconds 00:05:33.727 00:05:33.727 real 0m0.048s 00:05:33.727 user 0m0.014s 00:05:33.727 sys 0m0.033s 00:05:33.727 19:35:43 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.727 19:35:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.727 ************************************ 00:05:33.727 END TEST env_mem_callbacks 00:05:33.727 ************************************ 00:05:33.727 00:05:33.728 real 0m6.435s 00:05:33.728 user 0m4.419s 00:05:33.728 sys 0m1.050s 00:05:33.728 19:35:43 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.728 19:35:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.728 ************************************ 00:05:33.728 END TEST env 00:05:33.728 ************************************ 00:05:33.986 19:35:43 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:33.986 19:35:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.986 19:35:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.986 19:35:43 -- common/autotest_common.sh@10 -- # set +x 00:05:33.986 ************************************ 00:05:33.986 START TEST rpc 00:05:33.986 ************************************ 00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:33.986 * Looking for test storage... 00:05:33.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:33.986 19:35:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3842909 00:05:33.986 19:35:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:33.986 19:35:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.986 19:35:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3842909 00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@827 -- # '[' -z 3842909 ']' 00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
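The rpc suite that starts here follows one pattern throughout: launch spdk_tgt (with the bdev tracepoint group enabled via -e bdev), wait for the /var/tmp/spdk.sock UNIX socket, then drive the target with scripts/rpc.py. A hedged, by-hand sketch of that flow built from commands visible in this log; the until loop is a crude stand-in for the waitforlisten helper, and hugepages plus sufficient privileges are assumed to already be in place:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/bin/spdk_tgt" -e bdev &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done      # wait for the RPC socket to appear
  "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512       # creates the Malloc0 exercised by rpc_integrity below
  "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length     # same bdev listing the tests assert on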
00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.986 19:35:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.986 [2024-07-25 19:35:43.291607] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:33.986 [2024-07-25 19:35:43.291694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842909 ] 00:05:33.986 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.986 [2024-07-25 19:35:43.349307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.244 [2024-07-25 19:35:43.435370] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.244 [2024-07-25 19:35:43.435421] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3842909' to capture a snapshot of events at runtime. 00:05:34.244 [2024-07-25 19:35:43.435447] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.244 [2024-07-25 19:35:43.435458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.244 [2024-07-25 19:35:43.435467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3842909 for offline analysis/debug. 00:05:34.244 [2024-07-25 19:35:43.435493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.501 19:35:43 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.501 19:35:43 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:34.501 19:35:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:34.501 19:35:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:34.501 19:35:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.501 19:35:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.501 19:35:43 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.501 19:35:43 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.501 19:35:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 ************************************ 00:05:34.501 START TEST rpc_integrity 00:05:34.501 ************************************ 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.501 19:35:43 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.501 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.501 { 00:05:34.501 "name": "Malloc0", 00:05:34.501 "aliases": [ 00:05:34.501 "b4e411ad-3b44-4620-a41f-40bf4d52fe04" 00:05:34.501 ], 00:05:34.501 "product_name": "Malloc disk", 00:05:34.501 "block_size": 512, 00:05:34.501 "num_blocks": 16384, 00:05:34.502 "uuid": "b4e411ad-3b44-4620-a41f-40bf4d52fe04", 00:05:34.502 "assigned_rate_limits": { 00:05:34.502 "rw_ios_per_sec": 0, 00:05:34.502 "rw_mbytes_per_sec": 0, 00:05:34.502 "r_mbytes_per_sec": 0, 00:05:34.502 "w_mbytes_per_sec": 0 00:05:34.502 }, 00:05:34.502 "claimed": false, 00:05:34.502 "zoned": false, 00:05:34.502 "supported_io_types": { 00:05:34.502 "read": true, 00:05:34.502 "write": true, 00:05:34.502 "unmap": true, 00:05:34.502 "write_zeroes": true, 00:05:34.502 "flush": true, 00:05:34.502 "reset": true, 00:05:34.502 "compare": false, 00:05:34.502 "compare_and_write": false, 00:05:34.502 "abort": true, 00:05:34.502 "nvme_admin": false, 00:05:34.502 "nvme_io": false 00:05:34.502 }, 00:05:34.502 "memory_domains": [ 00:05:34.502 { 00:05:34.502 "dma_device_id": "system", 00:05:34.502 "dma_device_type": 1 00:05:34.502 }, 00:05:34.502 { 00:05:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.502 "dma_device_type": 2 00:05:34.502 } 00:05:34.502 ], 00:05:34.502 "driver_specific": {} 00:05:34.502 } 00:05:34.502 ]' 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 [2024-07-25 19:35:43.818577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.502 [2024-07-25 19:35:43.818623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.502 [2024-07-25 19:35:43.818648] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb8bd60 00:05:34.502 [2024-07-25 19:35:43.818671] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.502 [2024-07-25 19:35:43.820200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.502 [2024-07-25 19:35:43.820227] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.502 Passthru0 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.502 { 00:05:34.502 "name": "Malloc0", 00:05:34.502 "aliases": [ 00:05:34.502 "b4e411ad-3b44-4620-a41f-40bf4d52fe04" 00:05:34.502 ], 00:05:34.502 "product_name": "Malloc disk", 00:05:34.502 "block_size": 512, 00:05:34.502 "num_blocks": 16384, 00:05:34.502 "uuid": "b4e411ad-3b44-4620-a41f-40bf4d52fe04", 00:05:34.502 "assigned_rate_limits": { 00:05:34.502 "rw_ios_per_sec": 0, 00:05:34.502 "rw_mbytes_per_sec": 0, 00:05:34.502 "r_mbytes_per_sec": 0, 00:05:34.502 "w_mbytes_per_sec": 0 00:05:34.502 }, 00:05:34.502 "claimed": true, 00:05:34.502 "claim_type": "exclusive_write", 00:05:34.502 "zoned": false, 00:05:34.502 "supported_io_types": { 00:05:34.502 "read": true, 00:05:34.502 "write": true, 00:05:34.502 "unmap": true, 00:05:34.502 "write_zeroes": true, 00:05:34.502 "flush": true, 00:05:34.502 "reset": true, 00:05:34.502 "compare": false, 00:05:34.502 "compare_and_write": false, 00:05:34.502 "abort": true, 00:05:34.502 "nvme_admin": false, 00:05:34.502 "nvme_io": false 00:05:34.502 }, 00:05:34.502 "memory_domains": [ 00:05:34.502 { 00:05:34.502 "dma_device_id": "system", 00:05:34.502 "dma_device_type": 1 00:05:34.502 }, 00:05:34.502 { 00:05:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.502 "dma_device_type": 2 00:05:34.502 } 00:05:34.502 ], 00:05:34.502 "driver_specific": {} 00:05:34.502 }, 00:05:34.502 { 00:05:34.502 "name": "Passthru0", 00:05:34.502 "aliases": [ 00:05:34.502 "71f738e5-e185-577a-a197-7fe6ecd03eb2" 00:05:34.502 ], 00:05:34.502 "product_name": "passthru", 00:05:34.502 "block_size": 512, 00:05:34.502 "num_blocks": 16384, 00:05:34.502 "uuid": "71f738e5-e185-577a-a197-7fe6ecd03eb2", 00:05:34.502 "assigned_rate_limits": { 00:05:34.502 "rw_ios_per_sec": 0, 00:05:34.502 "rw_mbytes_per_sec": 0, 00:05:34.502 "r_mbytes_per_sec": 0, 00:05:34.502 "w_mbytes_per_sec": 0 00:05:34.502 }, 00:05:34.502 "claimed": false, 00:05:34.502 "zoned": false, 00:05:34.502 "supported_io_types": { 00:05:34.502 "read": true, 00:05:34.502 "write": true, 00:05:34.502 "unmap": true, 00:05:34.502 "write_zeroes": true, 00:05:34.502 "flush": true, 00:05:34.502 "reset": true, 00:05:34.502 "compare": false, 00:05:34.502 "compare_and_write": false, 00:05:34.502 "abort": true, 00:05:34.502 "nvme_admin": false, 00:05:34.502 "nvme_io": false 00:05:34.502 }, 00:05:34.502 "memory_domains": [ 00:05:34.502 { 00:05:34.502 "dma_device_id": "system", 00:05:34.502 "dma_device_type": 1 00:05:34.502 }, 00:05:34.502 { 00:05:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.502 "dma_device_type": 2 00:05:34.502 } 00:05:34.502 ], 00:05:34.502 "driver_specific": { 00:05:34.502 "passthru": { 00:05:34.502 "name": "Passthru0", 00:05:34.502 "base_bdev_name": "Malloc0" 00:05:34.502 } 00:05:34.502 } 00:05:34.502 } 00:05:34.502 ]' 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 
19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.502 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.760 19:35:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.760 00:05:34.760 real 0m0.227s 00:05:34.760 user 0m0.150s 00:05:34.760 sys 0m0.020s 00:05:34.760 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.760 19:35:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 ************************************ 00:05:34.760 END TEST rpc_integrity 00:05:34.760 ************************************ 00:05:34.760 19:35:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.760 19:35:43 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.760 19:35:43 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.760 19:35:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 ************************************ 00:05:34.760 START TEST rpc_plugins 00:05:34.760 ************************************ 00:05:34.760 19:35:43 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:34.760 19:35:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.760 19:35:43 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.760 19:35:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 19:35:43 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.760 19:35:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.760 19:35:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.760 19:35:43 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.760 19:35:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.760 { 00:05:34.760 "name": "Malloc1", 00:05:34.760 "aliases": [ 00:05:34.760 "ffb61e61-72bd-4f1f-bc6a-19d966601aa7" 00:05:34.760 ], 00:05:34.760 "product_name": "Malloc disk", 00:05:34.760 "block_size": 4096, 00:05:34.760 "num_blocks": 256, 00:05:34.760 "uuid": "ffb61e61-72bd-4f1f-bc6a-19d966601aa7", 00:05:34.760 "assigned_rate_limits": { 00:05:34.760 "rw_ios_per_sec": 0, 00:05:34.760 "rw_mbytes_per_sec": 0, 00:05:34.760 "r_mbytes_per_sec": 0, 00:05:34.760 "w_mbytes_per_sec": 0 00:05:34.760 }, 00:05:34.760 "claimed": false, 00:05:34.760 "zoned": false, 00:05:34.760 "supported_io_types": { 00:05:34.760 "read": true, 00:05:34.760 "write": true, 00:05:34.760 "unmap": true, 00:05:34.760 "write_zeroes": true, 00:05:34.760 
"flush": true, 00:05:34.760 "reset": true, 00:05:34.760 "compare": false, 00:05:34.760 "compare_and_write": false, 00:05:34.760 "abort": true, 00:05:34.760 "nvme_admin": false, 00:05:34.760 "nvme_io": false 00:05:34.760 }, 00:05:34.760 "memory_domains": [ 00:05:34.760 { 00:05:34.760 "dma_device_id": "system", 00:05:34.760 "dma_device_type": 1 00:05:34.760 }, 00:05:34.760 { 00:05:34.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.760 "dma_device_type": 2 00:05:34.760 } 00:05:34.760 ], 00:05:34.760 "driver_specific": {} 00:05:34.760 } 00:05:34.760 ]' 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.760 19:35:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.760 00:05:34.760 real 0m0.111s 00:05:34.760 user 0m0.077s 00:05:34.760 sys 0m0.007s 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.760 19:35:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 ************************************ 00:05:34.760 END TEST rpc_plugins 00:05:34.760 ************************************ 00:05:34.760 19:35:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:34.760 19:35:44 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.760 19:35:44 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.760 19:35:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 ************************************ 00:05:34.760 START TEST rpc_trace_cmd_test 00:05:34.760 ************************************ 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:34.760 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3842909", 00:05:34.760 "tpoint_group_mask": "0x8", 00:05:34.760 "iscsi_conn": { 00:05:34.760 "mask": "0x2", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "scsi": { 00:05:34.760 "mask": "0x4", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "bdev": { 00:05:34.760 "mask": "0x8", 00:05:34.760 "tpoint_mask": 
"0xffffffffffffffff" 00:05:34.760 }, 00:05:34.760 "nvmf_rdma": { 00:05:34.760 "mask": "0x10", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "nvmf_tcp": { 00:05:34.760 "mask": "0x20", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "ftl": { 00:05:34.760 "mask": "0x40", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "blobfs": { 00:05:34.760 "mask": "0x80", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "dsa": { 00:05:34.760 "mask": "0x200", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "thread": { 00:05:34.760 "mask": "0x400", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "nvme_pcie": { 00:05:34.760 "mask": "0x800", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "iaa": { 00:05:34.760 "mask": "0x1000", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "nvme_tcp": { 00:05:34.760 "mask": "0x2000", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "bdev_nvme": { 00:05:34.760 "mask": "0x4000", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 }, 00:05:34.760 "sock": { 00:05:34.760 "mask": "0x8000", 00:05:34.760 "tpoint_mask": "0x0" 00:05:34.760 } 00:05:34.760 }' 00:05:34.760 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.018 00:05:35.018 real 0m0.194s 00:05:35.018 user 0m0.173s 00:05:35.018 sys 0m0.014s 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.018 19:35:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.018 ************************************ 00:05:35.018 END TEST rpc_trace_cmd_test 00:05:35.018 ************************************ 00:05:35.018 19:35:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.018 19:35:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.018 19:35:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.018 19:35:44 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.018 19:35:44 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.018 19:35:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.019 ************************************ 00:05:35.019 START TEST rpc_daemon_integrity 00:05:35.019 ************************************ 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.019 { 00:05:35.019 "name": "Malloc2", 00:05:35.019 "aliases": [ 00:05:35.019 "10692b34-ad13-4ad2-9376-07a51440fba2" 00:05:35.019 ], 00:05:35.019 "product_name": "Malloc disk", 00:05:35.019 "block_size": 512, 00:05:35.019 "num_blocks": 16384, 00:05:35.019 "uuid": "10692b34-ad13-4ad2-9376-07a51440fba2", 00:05:35.019 "assigned_rate_limits": { 00:05:35.019 "rw_ios_per_sec": 0, 00:05:35.019 "rw_mbytes_per_sec": 0, 00:05:35.019 "r_mbytes_per_sec": 0, 00:05:35.019 "w_mbytes_per_sec": 0 00:05:35.019 }, 00:05:35.019 "claimed": false, 00:05:35.019 "zoned": false, 00:05:35.019 "supported_io_types": { 00:05:35.019 "read": true, 00:05:35.019 "write": true, 00:05:35.019 "unmap": true, 00:05:35.019 "write_zeroes": true, 00:05:35.019 "flush": true, 00:05:35.019 "reset": true, 00:05:35.019 "compare": false, 00:05:35.019 "compare_and_write": false, 00:05:35.019 "abort": true, 00:05:35.019 "nvme_admin": false, 00:05:35.019 "nvme_io": false 00:05:35.019 }, 00:05:35.019 "memory_domains": [ 00:05:35.019 { 00:05:35.019 "dma_device_id": "system", 00:05:35.019 "dma_device_type": 1 00:05:35.019 }, 00:05:35.019 { 00:05:35.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.019 "dma_device_type": 2 00:05:35.019 } 00:05:35.019 ], 00:05:35.019 "driver_specific": {} 00:05:35.019 } 00:05:35.019 ]' 00:05:35.019 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 [2024-07-25 19:35:44.484501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.277 [2024-07-25 19:35:44.484546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.277 [2024-07-25 19:35:44.484570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd3d420 00:05:35.277 [2024-07-25 19:35:44.484586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.277 [2024-07-25 19:35:44.485925] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.277 [2024-07-25 19:35:44.485953] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.277 Passthru0 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.277 { 00:05:35.277 "name": "Malloc2", 00:05:35.277 "aliases": [ 00:05:35.277 "10692b34-ad13-4ad2-9376-07a51440fba2" 00:05:35.277 ], 00:05:35.277 "product_name": "Malloc disk", 00:05:35.277 "block_size": 512, 00:05:35.277 "num_blocks": 16384, 00:05:35.277 "uuid": "10692b34-ad13-4ad2-9376-07a51440fba2", 00:05:35.277 "assigned_rate_limits": { 00:05:35.277 "rw_ios_per_sec": 0, 00:05:35.277 "rw_mbytes_per_sec": 0, 00:05:35.277 "r_mbytes_per_sec": 0, 00:05:35.277 "w_mbytes_per_sec": 0 00:05:35.277 }, 00:05:35.277 "claimed": true, 00:05:35.277 "claim_type": "exclusive_write", 00:05:35.277 "zoned": false, 00:05:35.277 "supported_io_types": { 00:05:35.277 "read": true, 00:05:35.277 "write": true, 00:05:35.277 "unmap": true, 00:05:35.277 "write_zeroes": true, 00:05:35.277 "flush": true, 00:05:35.277 "reset": true, 00:05:35.277 "compare": false, 00:05:35.277 "compare_and_write": false, 00:05:35.277 "abort": true, 00:05:35.277 "nvme_admin": false, 00:05:35.277 "nvme_io": false 00:05:35.277 }, 00:05:35.277 "memory_domains": [ 00:05:35.277 { 00:05:35.277 "dma_device_id": "system", 00:05:35.277 "dma_device_type": 1 00:05:35.277 }, 00:05:35.277 { 00:05:35.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.277 "dma_device_type": 2 00:05:35.277 } 00:05:35.277 ], 00:05:35.277 "driver_specific": {} 00:05:35.277 }, 00:05:35.277 { 00:05:35.277 "name": "Passthru0", 00:05:35.277 "aliases": [ 00:05:35.277 "513edcd8-18fa-5e5e-adee-b1a98b57bb28" 00:05:35.277 ], 00:05:35.277 "product_name": "passthru", 00:05:35.277 "block_size": 512, 00:05:35.277 "num_blocks": 16384, 00:05:35.277 "uuid": "513edcd8-18fa-5e5e-adee-b1a98b57bb28", 00:05:35.277 "assigned_rate_limits": { 00:05:35.277 "rw_ios_per_sec": 0, 00:05:35.277 "rw_mbytes_per_sec": 0, 00:05:35.277 "r_mbytes_per_sec": 0, 00:05:35.277 "w_mbytes_per_sec": 0 00:05:35.277 }, 00:05:35.277 "claimed": false, 00:05:35.277 "zoned": false, 00:05:35.277 "supported_io_types": { 00:05:35.277 "read": true, 00:05:35.277 "write": true, 00:05:35.277 "unmap": true, 00:05:35.277 "write_zeroes": true, 00:05:35.277 "flush": true, 00:05:35.277 "reset": true, 00:05:35.277 "compare": false, 00:05:35.277 "compare_and_write": false, 00:05:35.277 "abort": true, 00:05:35.277 "nvme_admin": false, 00:05:35.277 "nvme_io": false 00:05:35.277 }, 00:05:35.277 "memory_domains": [ 00:05:35.277 { 00:05:35.277 "dma_device_id": "system", 00:05:35.277 "dma_device_type": 1 00:05:35.277 }, 00:05:35.277 { 00:05:35.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.277 "dma_device_type": 2 00:05:35.277 } 00:05:35.277 ], 00:05:35.277 "driver_specific": { 00:05:35.277 "passthru": { 00:05:35.277 "name": "Passthru0", 00:05:35.277 "base_bdev_name": "Malloc2" 00:05:35.277 } 00:05:35.277 } 00:05:35.277 } 00:05:35.277 ]' 00:05:35.277 19:35:44 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.277 00:05:35.277 real 0m0.222s 00:05:35.277 user 0m0.151s 00:05:35.277 sys 0m0.020s 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.277 19:35:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 ************************************ 00:05:35.277 END TEST rpc_daemon_integrity 00:05:35.277 ************************************ 00:05:35.277 19:35:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.277 19:35:44 rpc -- rpc/rpc.sh@84 -- # killprocess 3842909 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@946 -- # '[' -z 3842909 ']' 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@950 -- # kill -0 3842909 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@951 -- # uname 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3842909 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3842909' 00:05:35.277 killing process with pid 3842909 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@965 -- # kill 3842909 00:05:35.277 19:35:44 rpc -- common/autotest_common.sh@970 -- # wait 3842909 00:05:35.843 00:05:35.843 real 0m1.857s 00:05:35.843 user 0m2.355s 00:05:35.843 sys 0m0.562s 00:05:35.843 19:35:45 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.843 19:35:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.843 ************************************ 00:05:35.843 END TEST rpc 00:05:35.843 ************************************ 00:05:35.843 19:35:45 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:35.843 19:35:45 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.843 19:35:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.843 19:35:45 -- common/autotest_common.sh@10 -- # set +x 00:05:35.843 ************************************ 00:05:35.843 START TEST skip_rpc 00:05:35.843 ************************************ 00:05:35.843 19:35:45 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:35.843 * Looking for test storage... 00:05:35.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.843 19:35:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.843 19:35:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.843 19:35:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:35.843 19:35:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.843 19:35:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.843 19:35:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.843 ************************************ 00:05:35.843 START TEST skip_rpc 00:05:35.843 ************************************ 00:05:35.843 19:35:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:35.843 19:35:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3843346 00:05:35.843 19:35:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:35.843 19:35:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.843 19:35:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:35.843 [2024-07-25 19:35:45.229266] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
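The skip_rpc case above starts spdk_tgt with --no-rpc-server, so the only thing to verify is that an RPC call fails while the target is running. A minimal sketch of that flow, assuming the working directory is the SPDK repo (the real test goes through the NOT and killprocess helpers in autotest_common.sh):

    # Start the target on core 0 without its JSON-RPC server.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                  # let EAL and the reactor come up

    # With no RPC server listening, this call must fail; the test asserts that.
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
        exit 1
    fi

    kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null || true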
00:05:35.843 [2024-07-25 19:35:45.229329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843346 ] 00:05:35.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.102 [2024-07-25 19:35:45.288727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.102 [2024-07-25 19:35:45.378423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3843346 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3843346 ']' 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3843346 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3843346 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3843346' 00:05:41.398 killing process with pid 3843346 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3843346 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3843346 00:05:41.398 00:05:41.398 real 0m5.436s 00:05:41.398 user 0m5.130s 00:05:41.398 sys 0m0.310s 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.398 19:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 ************************************ 00:05:41.398 END TEST skip_rpc 
00:05:41.398 ************************************ 00:05:41.398 19:35:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:41.398 19:35:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.398 19:35:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.398 19:35:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 ************************************ 00:05:41.398 START TEST skip_rpc_with_json 00:05:41.398 ************************************ 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3844039 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3844039 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3844039 ']' 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.398 19:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 [2024-07-25 19:35:50.719139] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
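skip_rpc_with_json first generates a JSON configuration through the live target and later replays it non-interactively. A condensed sketch of that round trip, using the same RPCs that appear in the log below and abbreviating the repo paths:

    rpc=./scripts/rpc.py

    # No TCP transport exists yet, so the first query is expected to fail.
    $rpc nvmf_get_transports --trtype tcp || true

    # Create the transport, then snapshot the whole runtime config to JSON.
    $rpc nvmf_create_transport -t tcp
    $rpc save_config > test/rpc/config.json

    # The second half of the test restarts the target from that file:
    #   ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
    # and greps the captured log for the "TCP Transport Init" notice.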
00:05:41.398 [2024-07-25 19:35:50.719229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844039 ] 00:05:41.398 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.398 [2024-07-25 19:35:50.777939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.655 [2024-07-25 19:35:50.864511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 [2024-07-25 19:35:51.122298] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:41.913 request: 00:05:41.913 { 00:05:41.913 "trtype": "tcp", 00:05:41.913 "method": "nvmf_get_transports", 00:05:41.913 "req_id": 1 00:05:41.913 } 00:05:41.913 Got JSON-RPC error response 00:05:41.913 response: 00:05:41.913 { 00:05:41.913 "code": -19, 00:05:41.913 "message": "No such device" 00:05:41.913 } 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 [2024-07-25 19:35:51.130428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.913 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.913 { 00:05:41.913 "subsystems": [ 00:05:41.913 { 00:05:41.913 "subsystem": "vfio_user_target", 00:05:41.913 "config": null 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "subsystem": "keyring", 00:05:41.913 "config": [] 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "subsystem": "iobuf", 00:05:41.913 "config": [ 00:05:41.913 { 00:05:41.913 "method": "iobuf_set_options", 00:05:41.913 "params": { 00:05:41.913 "small_pool_count": 8192, 00:05:41.913 "large_pool_count": 1024, 00:05:41.913 "small_bufsize": 8192, 00:05:41.913 "large_bufsize": 135168 00:05:41.913 } 00:05:41.913 } 00:05:41.913 ] 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "subsystem": "sock", 00:05:41.913 "config": [ 00:05:41.913 { 00:05:41.913 "method": "sock_set_default_impl", 00:05:41.913 "params": { 00:05:41.913 "impl_name": "posix" 00:05:41.913 } 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "method": 
"sock_impl_set_options", 00:05:41.913 "params": { 00:05:41.913 "impl_name": "ssl", 00:05:41.913 "recv_buf_size": 4096, 00:05:41.913 "send_buf_size": 4096, 00:05:41.913 "enable_recv_pipe": true, 00:05:41.913 "enable_quickack": false, 00:05:41.913 "enable_placement_id": 0, 00:05:41.913 "enable_zerocopy_send_server": true, 00:05:41.913 "enable_zerocopy_send_client": false, 00:05:41.913 "zerocopy_threshold": 0, 00:05:41.913 "tls_version": 0, 00:05:41.913 "enable_ktls": false 00:05:41.913 } 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "method": "sock_impl_set_options", 00:05:41.913 "params": { 00:05:41.913 "impl_name": "posix", 00:05:41.913 "recv_buf_size": 2097152, 00:05:41.913 "send_buf_size": 2097152, 00:05:41.913 "enable_recv_pipe": true, 00:05:41.913 "enable_quickack": false, 00:05:41.913 "enable_placement_id": 0, 00:05:41.913 "enable_zerocopy_send_server": true, 00:05:41.913 "enable_zerocopy_send_client": false, 00:05:41.913 "zerocopy_threshold": 0, 00:05:41.913 "tls_version": 0, 00:05:41.913 "enable_ktls": false 00:05:41.913 } 00:05:41.913 } 00:05:41.913 ] 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "subsystem": "vmd", 00:05:41.913 "config": [] 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "subsystem": "accel", 00:05:41.913 "config": [ 00:05:41.913 { 00:05:41.913 "method": "accel_set_options", 00:05:41.913 "params": { 00:05:41.913 "small_cache_size": 128, 00:05:41.913 "large_cache_size": 16, 00:05:41.913 "task_count": 2048, 00:05:41.913 "sequence_count": 2048, 00:05:41.913 "buf_count": 2048 00:05:41.913 } 00:05:41.913 } 00:05:41.913 ] 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "subsystem": "bdev", 00:05:41.913 "config": [ 00:05:41.913 { 00:05:41.913 "method": "bdev_set_options", 00:05:41.913 "params": { 00:05:41.913 "bdev_io_pool_size": 65535, 00:05:41.913 "bdev_io_cache_size": 256, 00:05:41.913 "bdev_auto_examine": true, 00:05:41.913 "iobuf_small_cache_size": 128, 00:05:41.913 "iobuf_large_cache_size": 16 00:05:41.913 } 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "method": "bdev_raid_set_options", 00:05:41.913 "params": { 00:05:41.913 "process_window_size_kb": 1024 00:05:41.913 } 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "method": "bdev_iscsi_set_options", 00:05:41.913 "params": { 00:05:41.913 "timeout_sec": 30 00:05:41.913 } 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "method": "bdev_nvme_set_options", 00:05:41.913 "params": { 00:05:41.913 "action_on_timeout": "none", 00:05:41.913 "timeout_us": 0, 00:05:41.913 "timeout_admin_us": 0, 00:05:41.913 "keep_alive_timeout_ms": 10000, 00:05:41.913 "arbitration_burst": 0, 00:05:41.913 "low_priority_weight": 0, 00:05:41.913 "medium_priority_weight": 0, 00:05:41.913 "high_priority_weight": 0, 00:05:41.913 "nvme_adminq_poll_period_us": 10000, 00:05:41.913 "nvme_ioq_poll_period_us": 0, 00:05:41.913 "io_queue_requests": 0, 00:05:41.913 "delay_cmd_submit": true, 00:05:41.913 "transport_retry_count": 4, 00:05:41.913 "bdev_retry_count": 3, 00:05:41.913 "transport_ack_timeout": 0, 00:05:41.913 "ctrlr_loss_timeout_sec": 0, 00:05:41.913 "reconnect_delay_sec": 0, 00:05:41.913 "fast_io_fail_timeout_sec": 0, 00:05:41.913 "disable_auto_failback": false, 00:05:41.913 "generate_uuids": false, 00:05:41.913 "transport_tos": 0, 00:05:41.913 "nvme_error_stat": false, 00:05:41.913 "rdma_srq_size": 0, 00:05:41.913 "io_path_stat": false, 00:05:41.913 "allow_accel_sequence": false, 00:05:41.913 "rdma_max_cq_size": 0, 00:05:41.913 "rdma_cm_event_timeout_ms": 0, 00:05:41.913 "dhchap_digests": [ 00:05:41.913 "sha256", 00:05:41.913 "sha384", 00:05:41.913 "sha512" 
00:05:41.913 ], 00:05:41.913 "dhchap_dhgroups": [ 00:05:41.913 "null", 00:05:41.913 "ffdhe2048", 00:05:41.913 "ffdhe3072", 00:05:41.913 "ffdhe4096", 00:05:41.913 "ffdhe6144", 00:05:41.913 "ffdhe8192" 00:05:41.913 ] 00:05:41.913 } 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "method": "bdev_nvme_set_hotplug", 00:05:41.913 "params": { 00:05:41.913 "period_us": 100000, 00:05:41.914 "enable": false 00:05:41.914 } 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "method": "bdev_wait_for_examine" 00:05:41.914 } 00:05:41.914 ] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "scsi", 00:05:41.914 "config": null 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "scheduler", 00:05:41.914 "config": [ 00:05:41.914 { 00:05:41.914 "method": "framework_set_scheduler", 00:05:41.914 "params": { 00:05:41.914 "name": "static" 00:05:41.914 } 00:05:41.914 } 00:05:41.914 ] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "vhost_scsi", 00:05:41.914 "config": [] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "vhost_blk", 00:05:41.914 "config": [] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "ublk", 00:05:41.914 "config": [] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "nbd", 00:05:41.914 "config": [] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "nvmf", 00:05:41.914 "config": [ 00:05:41.914 { 00:05:41.914 "method": "nvmf_set_config", 00:05:41.914 "params": { 00:05:41.914 "discovery_filter": "match_any", 00:05:41.914 "admin_cmd_passthru": { 00:05:41.914 "identify_ctrlr": false 00:05:41.914 } 00:05:41.914 } 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "method": "nvmf_set_max_subsystems", 00:05:41.914 "params": { 00:05:41.914 "max_subsystems": 1024 00:05:41.914 } 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "method": "nvmf_set_crdt", 00:05:41.914 "params": { 00:05:41.914 "crdt1": 0, 00:05:41.914 "crdt2": 0, 00:05:41.914 "crdt3": 0 00:05:41.914 } 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "method": "nvmf_create_transport", 00:05:41.914 "params": { 00:05:41.914 "trtype": "TCP", 00:05:41.914 "max_queue_depth": 128, 00:05:41.914 "max_io_qpairs_per_ctrlr": 127, 00:05:41.914 "in_capsule_data_size": 4096, 00:05:41.914 "max_io_size": 131072, 00:05:41.914 "io_unit_size": 131072, 00:05:41.914 "max_aq_depth": 128, 00:05:41.914 "num_shared_buffers": 511, 00:05:41.914 "buf_cache_size": 4294967295, 00:05:41.914 "dif_insert_or_strip": false, 00:05:41.914 "zcopy": false, 00:05:41.914 "c2h_success": true, 00:05:41.914 "sock_priority": 0, 00:05:41.914 "abort_timeout_sec": 1, 00:05:41.914 "ack_timeout": 0, 00:05:41.914 "data_wr_pool_size": 0 00:05:41.914 } 00:05:41.914 } 00:05:41.914 ] 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "subsystem": "iscsi", 00:05:41.914 "config": [ 00:05:41.914 { 00:05:41.914 "method": "iscsi_set_options", 00:05:41.914 "params": { 00:05:41.914 "node_base": "iqn.2016-06.io.spdk", 00:05:41.914 "max_sessions": 128, 00:05:41.914 "max_connections_per_session": 2, 00:05:41.914 "max_queue_depth": 64, 00:05:41.914 "default_time2wait": 2, 00:05:41.914 "default_time2retain": 20, 00:05:41.914 "first_burst_length": 8192, 00:05:41.914 "immediate_data": true, 00:05:41.914 "allow_duplicated_isid": false, 00:05:41.914 "error_recovery_level": 0, 00:05:41.914 "nop_timeout": 60, 00:05:41.914 "nop_in_interval": 30, 00:05:41.914 "disable_chap": false, 00:05:41.914 "require_chap": false, 00:05:41.914 "mutual_chap": false, 00:05:41.914 "chap_group": 0, 00:05:41.914 "max_large_datain_per_connection": 64, 00:05:41.914 "max_r2t_per_connection": 4, 00:05:41.914 
"pdu_pool_size": 36864, 00:05:41.914 "immediate_data_pool_size": 16384, 00:05:41.914 "data_out_pool_size": 2048 00:05:41.914 } 00:05:41.914 } 00:05:41.914 ] 00:05:41.914 } 00:05:41.914 ] 00:05:41.914 } 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3844039 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3844039 ']' 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3844039 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3844039 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3844039' 00:05:41.914 killing process with pid 3844039 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3844039 00:05:41.914 19:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3844039 00:05:42.478 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3844179 00:05:42.478 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:42.478 19:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3844179 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3844179 ']' 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3844179 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3844179 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3844179' 00:05:47.804 killing process with pid 3844179 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3844179 00:05:47.804 19:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3844179 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.804 00:05:47.804 real 
0m6.479s 00:05:47.804 user 0m6.053s 00:05:47.804 sys 0m0.695s 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.804 ************************************ 00:05:47.804 END TEST skip_rpc_with_json 00:05:47.804 ************************************ 00:05:47.804 19:35:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:47.804 19:35:57 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.804 19:35:57 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.804 19:35:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.804 ************************************ 00:05:47.804 START TEST skip_rpc_with_delay 00:05:47.804 ************************************ 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.804 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.063 [2024-07-25 19:35:57.248464] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
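That ERROR line is the whole point of skip_rpc_with_delay: --wait-for-rpc is only meaningful when an RPC server will be started, so combining it with --no-rpc-server has to be rejected. A condensed sketch of the negative check (the real script routes this through the NOT/valid_exec_arg helpers visible above and also verifies the exit status is an ordinary error rather than a signal, the es > 128 test):

    # Invalid flag combination; spdk_tgt should refuse to start.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi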
00:05:48.063 [2024-07-25 19:35:57.248566] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:48.063 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:48.063 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.063 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.063 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.063 00:05:48.063 real 0m0.070s 00:05:48.063 user 0m0.041s 00:05:48.063 sys 0m0.028s 00:05:48.063 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.063 19:35:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.063 ************************************ 00:05:48.063 END TEST skip_rpc_with_delay 00:05:48.063 ************************************ 00:05:48.063 19:35:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.063 19:35:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.063 19:35:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.063 19:35:57 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.063 19:35:57 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.063 19:35:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.063 ************************************ 00:05:48.063 START TEST exit_on_failed_rpc_init 00:05:48.063 ************************************ 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3844893 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3844893 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3844893 ']' 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.063 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.063 [2024-07-25 19:35:57.363098] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
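exit_on_failed_rpc_init first brings up a target that owns the default /var/tmp/spdk.sock and waits for it to listen before provoking a conflict. The real waitforlisten helper lives in autotest_common.sh; a rough, hypothetical stand-in that polls with a cheap RPC might look like this:

    # Hypothetical polling loop, not the actual helper: retry until the target
    # answers on the default /var/tmp/spdk.sock, giving up after ~10 seconds.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done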
00:05:48.063 [2024-07-25 19:35:57.363191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844893 ] 00:05:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.063 [2024-07-25 19:35:57.428900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.322 [2024-07-25 19:35:57.525342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.580 19:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.580 [2024-07-25 19:35:57.837349] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:48.580 [2024-07-25 19:35:57.837427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844904 ] 00:05:48.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.580 [2024-07-25 19:35:57.899609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.580 [2024-07-25 19:35:57.996565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.580 [2024-07-25 19:35:57.996665] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
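With the first instance holding /var/tmp/spdk.sock, the second instance (core mask 0x2) fails exactly as the rpc.c ERROR lines above report. A minimal sketch of the conflict the test provokes, with paths abbreviated to the repo root:

    # First target owns the default RPC socket.
    ./build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    # ... waitforlisten on /var/tmp/spdk.sock ...

    # A second target on another core mask still tries to bind the same socket,
    # so rpc_listen fails and spdk_app_start returns an error (non-zero exit).
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second target started despite the socket conflict" >&2
        exit 1
    fi

    kill "$first_pid"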
00:05:48.580 [2024-07-25 19:35:57.996688] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.580 [2024-07-25 19:35:57.996702] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3844893 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3844893 ']' 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3844893 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3844893 00:05:48.838 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.839 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.839 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3844893' 00:05:48.839 killing process with pid 3844893 00:05:48.839 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3844893 00:05:48.839 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3844893 00:05:49.096 00:05:49.096 real 0m1.195s 00:05:49.096 user 0m1.304s 00:05:49.096 sys 0m0.466s 00:05:49.096 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.096 19:35:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.096 ************************************ 00:05:49.096 END TEST exit_on_failed_rpc_init 00:05:49.096 ************************************ 00:05:49.354 19:35:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.354 00:05:49.354 real 0m13.429s 00:05:49.354 user 0m12.627s 00:05:49.354 sys 0m1.666s 00:05:49.354 19:35:58 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.354 19:35:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.354 ************************************ 00:05:49.354 END TEST skip_rpc 00:05:49.354 ************************************ 00:05:49.354 19:35:58 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.354 19:35:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.354 19:35:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.354 19:35:58 -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.354 ************************************ 00:05:49.354 START TEST rpc_client 00:05:49.354 ************************************ 00:05:49.354 19:35:58 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.354 * Looking for test storage... 00:05:49.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:49.354 19:35:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:49.354 OK 00:05:49.354 19:35:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.354 00:05:49.354 real 0m0.071s 00:05:49.354 user 0m0.028s 00:05:49.354 sys 0m0.048s 00:05:49.354 19:35:58 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.354 19:35:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:49.354 ************************************ 00:05:49.354 END TEST rpc_client 00:05:49.354 ************************************ 00:05:49.354 19:35:58 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.354 19:35:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.354 19:35:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.354 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.354 ************************************ 00:05:49.354 START TEST json_config 00:05:49.354 ************************************ 00:05:49.354 19:35:58 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.354 19:35:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.354 19:35:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.355 19:35:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.355 19:35:58 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.355 19:35:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.355 19:35:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.355 19:35:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.355 19:35:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.355 19:35:58 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.355 19:35:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@47 -- # : 0 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.355 19:35:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:49.355 INFO: JSON configuration test init 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.355 19:35:58 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:49.355 19:35:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:49.355 19:35:58 json_config -- json_config/common.sh@10 -- # shift 00:05:49.355 19:35:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.355 19:35:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.355 19:35:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.355 19:35:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.355 19:35:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.355 19:35:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3845146 00:05:49.355 19:35:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:49.355 19:35:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.355 Waiting for target to run... 
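For json_config the target runs on a dedicated RPC socket and stays paused with --wait-for-rpc until it is configured; every tgt_rpc call that follows passes that socket explicitly. A short sketch of the startup plus the first configuration step, abbreviating paths and showing the gen_nvme.sh output fed into load_config, which is what the two adjacent calls in the log amount to:

    sock=/var/tmp/spdk_tgt.sock

    # Start the target with a 1024 MB memory size (-s), paused until RPC init.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
    # ... waitforlisten on $sock ...

    # Seed the paused target with a config generated from the local NVMe devices.
    ./scripts/gen_nvme.sh --json-with-subsystems | ./scripts/rpc.py -s "$sock" load_config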
00:05:49.355 19:35:58 json_config -- json_config/common.sh@25 -- # waitforlisten 3845146 /var/tmp/spdk_tgt.sock 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@827 -- # '[' -z 3845146 ']' 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.355 19:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.615 [2024-07-25 19:35:58.808658] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:49.615 [2024-07-25 19:35:58.808742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845146 ] 00:05:49.615 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.873 [2024-07-25 19:35:59.152626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.873 [2024-07-25 19:35:59.216391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.438 19:35:59 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.438 19:35:59 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:50.439 19:35:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.439 00:05:50.439 19:35:59 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:50.439 19:35:59 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:50.439 19:35:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.439 19:35:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.439 19:35:59 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:50.439 19:35:59 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:50.439 19:35:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.439 19:35:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.439 19:35:59 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:50.439 19:35:59 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:50.439 19:35:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:53.720 19:36:02 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:53.720 19:36:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:53.720 19:36:02 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.720 19:36:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.720 19:36:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:53.720 19:36:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:53.720 19:36:02 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:53.720 19:36:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:53.720 19:36:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:53.720 19:36:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:53.978 19:36:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.978 19:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:53.978 19:36:03 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.978 19:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:53.978 19:36:03 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:53.978 19:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:54.234 MallocForNvmf0 00:05:54.234 19:36:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.234 19:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.492 MallocForNvmf1 00:05:54.492 19:36:03 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.492 19:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.749 [2024-07-25 19:36:03.944626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.749 19:36:03 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:54.749 19:36:03 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.007 19:36:04 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.007 19:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.265 19:36:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.265 19:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.265 19:36:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:55.265 19:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:55.523 [2024-07-25 19:36:04.923922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.523 19:36:04 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:55.523 19:36:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.523 19:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.781 19:36:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:55.781 19:36:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.781 19:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.781 19:36:04 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:55.781 19:36:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.781 19:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:56.039 MallocBdevForConfigChangeCheck 00:05:56.039 19:36:05 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:56.039 19:36:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.039 19:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.039 19:36:05 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:56.039 19:36:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.297 19:36:05 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:56.297 INFO: shutting down applications... 
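The NVMe-oF configuration that is about to be torn down was assembled above from a handful of RPCs. Replayed by hand against the same socket (./ again standing in for the workspace path), it amounts to:

rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
# (if the target is still paused in --wait-for-rpc mode, run: $rpc framework_start_init)
# Two malloc bdevs to serve as namespaces (size in MiB, block size in bytes).
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, both namespaces, one listener on 127.0.0.1:4420.
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420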
00:05:56.297 19:36:05 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:56.297 19:36:05 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:56.297 19:36:05 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:56.297 19:36:05 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:58.196 Calling clear_iscsi_subsystem 00:05:58.196 Calling clear_nvmf_subsystem 00:05:58.196 Calling clear_nbd_subsystem 00:05:58.196 Calling clear_ublk_subsystem 00:05:58.196 Calling clear_vhost_blk_subsystem 00:05:58.196 Calling clear_vhost_scsi_subsystem 00:05:58.196 Calling clear_bdev_subsystem 00:05:58.196 19:36:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:58.196 19:36:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:58.196 19:36:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:58.196 19:36:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.196 19:36:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:58.196 19:36:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:58.454 19:36:07 json_config -- json_config/json_config.sh@345 -- # break 00:05:58.454 19:36:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:58.454 19:36:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:58.454 19:36:07 json_config -- json_config/common.sh@31 -- # local app=target 00:05:58.454 19:36:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.454 19:36:07 json_config -- json_config/common.sh@35 -- # [[ -n 3845146 ]] 00:05:58.454 19:36:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3845146 00:05:58.454 19:36:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.454 19:36:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.454 19:36:07 json_config -- json_config/common.sh@41 -- # kill -0 3845146 00:05:58.454 19:36:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.021 19:36:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.021 19:36:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.021 19:36:08 json_config -- json_config/common.sh@41 -- # kill -0 3845146 00:05:59.021 19:36:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.021 19:36:08 json_config -- json_config/common.sh@43 -- # break 00:05:59.021 19:36:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.021 19:36:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.021 SPDK target shutdown done 00:05:59.021 19:36:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:59.021 INFO: relaunching applications... 
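Before the relaunch, note that the shutdown just traced is nothing more than a SIGINT followed by a bounded liveness poll; stripped of the helper plumbing it is roughly:

kill -SIGINT "$tgt_pid"
for i in $(seq 1 30); do
    # kill -0 only checks whether the process still exists.
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done

tgt_pid here is the PID captured at launch; the real helper keeps it in the app_pid associative array seen earlier.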
00:05:59.021 19:36:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.021 19:36:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:59.021 19:36:08 json_config -- json_config/common.sh@10 -- # shift 00:05:59.021 19:36:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.021 19:36:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.021 19:36:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.021 19:36:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.021 19:36:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.021 19:36:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3846928 00:05:59.021 19:36:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.021 19:36:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.021 Waiting for target to run... 00:05:59.021 19:36:08 json_config -- json_config/common.sh@25 -- # waitforlisten 3846928 /var/tmp/spdk_tgt.sock 00:05:59.021 19:36:08 json_config -- common/autotest_common.sh@827 -- # '[' -z 3846928 ']' 00:05:59.021 19:36:08 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.021 19:36:08 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.021 19:36:08 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.021 19:36:08 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.021 19:36:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.021 [2024-07-25 19:36:08.209956] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:59.021 [2024-07-25 19:36:08.210056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846928 ] 00:05:59.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.588 [2024-07-25 19:36:08.717483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.588 [2024-07-25 19:36:08.795752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.870 [2024-07-25 19:36:11.825978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.870 [2024-07-25 19:36:11.858454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.433 19:36:12 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.433 19:36:12 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:03.433 19:36:12 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.433 00:06:03.433 19:36:12 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:03.433 19:36:12 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:03.433 INFO: Checking if target configuration is the same... 
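The check announced here treats two JSON configs as equal when they match after sorting, so ordering differences between save_config output and the file on disk do not count as changes. A condensed sketch using the same config_filter.py helper (paths abbreviated, target socket as before); the trace that follows has exactly this shape:

rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
sort_cfg=./test/json_config/config_filter.py
live=$($rpc save_config | "$sort_cfg" -method sort)
saved=$("$sort_cfg" -method sort < ./spdk_tgt_config.json)
if diff -u <(echo "$saved") <(echo "$live") >/dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi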
00:06:03.433 19:36:12 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.433 19:36:12 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:03.433 19:36:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.433 + '[' 2 -ne 2 ']' 00:06:03.433 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:03.433 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:03.433 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.433 +++ basename /dev/fd/62 00:06:03.433 ++ mktemp /tmp/62.XXX 00:06:03.433 + tmp_file_1=/tmp/62.ebx 00:06:03.433 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.433 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.433 + tmp_file_2=/tmp/spdk_tgt_config.json.gR3 00:06:03.433 + ret=0 00:06:03.433 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.690 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.690 + diff -u /tmp/62.ebx /tmp/spdk_tgt_config.json.gR3 00:06:03.690 + echo 'INFO: JSON config files are the same' 00:06:03.690 INFO: JSON config files are the same 00:06:03.690 + rm /tmp/62.ebx /tmp/spdk_tgt_config.json.gR3 00:06:03.690 + exit 0 00:06:03.690 19:36:13 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:03.690 19:36:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:03.690 INFO: changing configuration and checking if this can be detected... 00:06:03.690 19:36:13 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.691 19:36:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.948 19:36:13 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.948 19:36:13 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:03.948 19:36:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.948 + '[' 2 -ne 2 ']' 00:06:03.948 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:03.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:03.948 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.948 +++ basename /dev/fd/62 00:06:03.948 ++ mktemp /tmp/62.XXX 00:06:03.948 + tmp_file_1=/tmp/62.2RB 00:06:03.948 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.948 + tmp_file_2=/tmp/spdk_tgt_config.json.isf 00:06:03.948 + ret=0 00:06:03.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.513 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.513 + diff -u /tmp/62.2RB /tmp/spdk_tgt_config.json.isf 00:06:04.513 + ret=1 00:06:04.513 + echo '=== Start of file: /tmp/62.2RB ===' 00:06:04.513 + cat /tmp/62.2RB 00:06:04.513 + echo '=== End of file: /tmp/62.2RB ===' 00:06:04.513 + echo '' 00:06:04.513 + echo '=== Start of file: /tmp/spdk_tgt_config.json.isf ===' 00:06:04.513 + cat /tmp/spdk_tgt_config.json.isf 00:06:04.513 + echo '=== End of file: /tmp/spdk_tgt_config.json.isf ===' 00:06:04.513 + echo '' 00:06:04.513 + rm /tmp/62.2RB /tmp/spdk_tgt_config.json.isf 00:06:04.513 + exit 1 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:04.513 INFO: configuration change detected. 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@317 -- # [[ -n 3846928 ]] 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.513 19:36:13 json_config -- json_config/json_config.sh@323 -- # killprocess 3846928 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@946 -- # '[' -z 3846928 ']' 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@950 -- # kill -0 3846928 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@951 -- # uname 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.513 19:36:13 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3846928 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3846928' 00:06:04.513 killing process with pid 3846928 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@965 -- # kill 3846928 00:06:04.513 19:36:13 json_config -- common/autotest_common.sh@970 -- # wait 3846928 00:06:06.409 19:36:15 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.409 19:36:15 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:06.409 19:36:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.409 19:36:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.409 19:36:15 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:06.409 19:36:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:06.409 INFO: Success 00:06:06.409 00:06:06.409 real 0m16.738s 00:06:06.409 user 0m18.727s 00:06:06.409 sys 0m2.032s 00:06:06.409 19:36:15 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.409 19:36:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.409 ************************************ 00:06:06.409 END TEST json_config 00:06:06.409 ************************************ 00:06:06.409 19:36:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:06.409 19:36:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.409 19:36:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.409 19:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.409 ************************************ 00:06:06.409 START TEST json_config_extra_key 00:06:06.409 ************************************ 00:06:06.409 19:36:15 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:06.409 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.409 19:36:15 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.409 19:36:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.410 19:36:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.410 19:36:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.410 19:36:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.410 19:36:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.410 19:36:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.410 19:36:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.410 19:36:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.410 19:36:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.410 19:36:15 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.410 19:36:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:06.410 INFO: launching applications... 00:06:06.410 19:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3847991 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.410 Waiting for target to run... 
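Unlike the earlier json_config run, this test boots the target directly from a prebuilt JSON file (extra_key.json) instead of configuring it over RPC after startup. The contents of extra_key.json are not shown in this log; purely as an illustration of the --json format that save_config emits and spdk_tgt consumes, a minimal hand-written config could look like this (the file name and malloc parameters are invented for the example):

cat > /tmp/minimal_tgt_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_tgt_config.json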
00:06:06.410 19:36:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3847991 /var/tmp/spdk_tgt.sock 00:06:06.410 19:36:15 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3847991 ']' 00:06:06.410 19:36:15 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.410 19:36:15 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.410 19:36:15 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.410 19:36:15 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.410 19:36:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.410 [2024-07-25 19:36:15.591485] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:06.410 [2024-07-25 19:36:15.591584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847991 ] 00:06:06.410 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.668 [2024-07-25 19:36:16.085163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.925 [2024-07-25 19:36:16.167303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.182 19:36:16 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.182 19:36:16 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:07.182 00:06:07.182 19:36:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:07.182 INFO: shutting down applications... 
00:06:07.182 19:36:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3847991 ]] 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3847991 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3847991 00:06:07.182 19:36:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3847991 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.775 19:36:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:07.775 SPDK target shutdown done 00:06:07.775 19:36:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:07.775 Success 00:06:07.775 00:06:07.775 real 0m1.536s 00:06:07.775 user 0m1.359s 00:06:07.775 sys 0m0.570s 00:06:07.775 19:36:17 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.775 19:36:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.775 ************************************ 00:06:07.775 END TEST json_config_extra_key 00:06:07.775 ************************************ 00:06:07.775 19:36:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.775 19:36:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.775 19:36:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.775 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:07.775 ************************************ 00:06:07.775 START TEST alias_rpc 00:06:07.775 ************************************ 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.775 * Looking for test storage... 
00:06:07.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:07.775 19:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.775 19:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3848306 00:06:07.775 19:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.775 19:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3848306 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3848306 ']' 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.775 19:36:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.775 [2024-07-25 19:36:17.169214] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:07.775 [2024-07-25 19:36:17.169297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848306 ] 00:06:07.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.033 [2024-07-25 19:36:17.227089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.033 [2024-07-25 19:36:17.310564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.291 19:36:17 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.291 19:36:17 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.291 19:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:08.548 19:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3848306 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3848306 ']' 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3848306 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3848306 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3848306' 00:06:08.548 killing process with pid 3848306 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@965 -- # kill 3848306 00:06:08.548 19:36:17 alias_rpc -- common/autotest_common.sh@970 -- # wait 3848306 00:06:09.114 00:06:09.114 real 0m1.179s 00:06:09.114 user 0m1.263s 00:06:09.114 sys 0m0.411s 00:06:09.114 19:36:18 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.114 19:36:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.114 
************************************ 00:06:09.114 END TEST alias_rpc 00:06:09.114 ************************************ 00:06:09.114 19:36:18 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:09.114 19:36:18 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:09.114 19:36:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.114 19:36:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.114 19:36:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.114 ************************************ 00:06:09.114 START TEST spdkcli_tcp 00:06:09.114 ************************************ 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:09.114 * Looking for test storage... 00:06:09.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3848491 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:09.114 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3848491 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3848491 ']' 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.114 19:36:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.114 [2024-07-25 19:36:18.407652] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:09.114 [2024-07-25 19:36:18.407775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848491 ] 00:06:09.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.114 [2024-07-25 19:36:18.465758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.372 [2024-07-25 19:36:18.550837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.372 [2024-07-25 19:36:18.550840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.630 19:36:18 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.630 19:36:18 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:09.630 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3848495 00:06:09.630 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.630 19:36:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:09.630 [ 00:06:09.630 "bdev_malloc_delete", 00:06:09.630 "bdev_malloc_create", 00:06:09.630 "bdev_null_resize", 00:06:09.630 "bdev_null_delete", 00:06:09.630 "bdev_null_create", 00:06:09.630 "bdev_nvme_cuse_unregister", 00:06:09.630 "bdev_nvme_cuse_register", 00:06:09.630 "bdev_opal_new_user", 00:06:09.630 "bdev_opal_set_lock_state", 00:06:09.630 "bdev_opal_delete", 00:06:09.630 "bdev_opal_get_info", 00:06:09.630 "bdev_opal_create", 00:06:09.630 "bdev_nvme_opal_revert", 00:06:09.630 "bdev_nvme_opal_init", 00:06:09.630 "bdev_nvme_send_cmd", 00:06:09.630 "bdev_nvme_get_path_iostat", 00:06:09.630 "bdev_nvme_get_mdns_discovery_info", 00:06:09.630 "bdev_nvme_stop_mdns_discovery", 00:06:09.630 "bdev_nvme_start_mdns_discovery", 00:06:09.630 "bdev_nvme_set_multipath_policy", 00:06:09.630 "bdev_nvme_set_preferred_path", 00:06:09.630 "bdev_nvme_get_io_paths", 00:06:09.630 "bdev_nvme_remove_error_injection", 00:06:09.630 "bdev_nvme_add_error_injection", 00:06:09.630 "bdev_nvme_get_discovery_info", 00:06:09.630 "bdev_nvme_stop_discovery", 00:06:09.630 "bdev_nvme_start_discovery", 00:06:09.630 "bdev_nvme_get_controller_health_info", 00:06:09.630 "bdev_nvme_disable_controller", 00:06:09.630 "bdev_nvme_enable_controller", 00:06:09.630 "bdev_nvme_reset_controller", 00:06:09.630 "bdev_nvme_get_transport_statistics", 00:06:09.630 "bdev_nvme_apply_firmware", 00:06:09.630 "bdev_nvme_detach_controller", 00:06:09.630 "bdev_nvme_get_controllers", 00:06:09.630 "bdev_nvme_attach_controller", 00:06:09.630 "bdev_nvme_set_hotplug", 00:06:09.630 "bdev_nvme_set_options", 00:06:09.630 "bdev_passthru_delete", 00:06:09.630 "bdev_passthru_create", 00:06:09.630 "bdev_lvol_set_parent_bdev", 00:06:09.630 "bdev_lvol_set_parent", 00:06:09.630 "bdev_lvol_check_shallow_copy", 00:06:09.630 "bdev_lvol_start_shallow_copy", 00:06:09.630 "bdev_lvol_grow_lvstore", 00:06:09.630 "bdev_lvol_get_lvols", 00:06:09.630 "bdev_lvol_get_lvstores", 00:06:09.630 "bdev_lvol_delete", 00:06:09.630 "bdev_lvol_set_read_only", 00:06:09.630 "bdev_lvol_resize", 00:06:09.630 "bdev_lvol_decouple_parent", 00:06:09.630 "bdev_lvol_inflate", 00:06:09.630 "bdev_lvol_rename", 00:06:09.630 "bdev_lvol_clone_bdev", 00:06:09.630 "bdev_lvol_clone", 00:06:09.630 "bdev_lvol_snapshot", 00:06:09.630 "bdev_lvol_create", 00:06:09.630 "bdev_lvol_delete_lvstore", 00:06:09.630 "bdev_lvol_rename_lvstore", 
00:06:09.630 "bdev_lvol_create_lvstore", 00:06:09.630 "bdev_raid_set_options", 00:06:09.630 "bdev_raid_remove_base_bdev", 00:06:09.630 "bdev_raid_add_base_bdev", 00:06:09.631 "bdev_raid_delete", 00:06:09.631 "bdev_raid_create", 00:06:09.631 "bdev_raid_get_bdevs", 00:06:09.631 "bdev_error_inject_error", 00:06:09.631 "bdev_error_delete", 00:06:09.631 "bdev_error_create", 00:06:09.631 "bdev_split_delete", 00:06:09.631 "bdev_split_create", 00:06:09.631 "bdev_delay_delete", 00:06:09.631 "bdev_delay_create", 00:06:09.631 "bdev_delay_update_latency", 00:06:09.631 "bdev_zone_block_delete", 00:06:09.631 "bdev_zone_block_create", 00:06:09.631 "blobfs_create", 00:06:09.631 "blobfs_detect", 00:06:09.631 "blobfs_set_cache_size", 00:06:09.631 "bdev_aio_delete", 00:06:09.631 "bdev_aio_rescan", 00:06:09.631 "bdev_aio_create", 00:06:09.631 "bdev_ftl_set_property", 00:06:09.631 "bdev_ftl_get_properties", 00:06:09.631 "bdev_ftl_get_stats", 00:06:09.631 "bdev_ftl_unmap", 00:06:09.631 "bdev_ftl_unload", 00:06:09.631 "bdev_ftl_delete", 00:06:09.631 "bdev_ftl_load", 00:06:09.631 "bdev_ftl_create", 00:06:09.631 "bdev_virtio_attach_controller", 00:06:09.631 "bdev_virtio_scsi_get_devices", 00:06:09.631 "bdev_virtio_detach_controller", 00:06:09.631 "bdev_virtio_blk_set_hotplug", 00:06:09.631 "bdev_iscsi_delete", 00:06:09.631 "bdev_iscsi_create", 00:06:09.631 "bdev_iscsi_set_options", 00:06:09.631 "accel_error_inject_error", 00:06:09.631 "ioat_scan_accel_module", 00:06:09.631 "dsa_scan_accel_module", 00:06:09.631 "iaa_scan_accel_module", 00:06:09.631 "vfu_virtio_create_scsi_endpoint", 00:06:09.631 "vfu_virtio_scsi_remove_target", 00:06:09.631 "vfu_virtio_scsi_add_target", 00:06:09.631 "vfu_virtio_create_blk_endpoint", 00:06:09.631 "vfu_virtio_delete_endpoint", 00:06:09.631 "keyring_file_remove_key", 00:06:09.631 "keyring_file_add_key", 00:06:09.631 "keyring_linux_set_options", 00:06:09.631 "iscsi_get_histogram", 00:06:09.631 "iscsi_enable_histogram", 00:06:09.631 "iscsi_set_options", 00:06:09.631 "iscsi_get_auth_groups", 00:06:09.631 "iscsi_auth_group_remove_secret", 00:06:09.631 "iscsi_auth_group_add_secret", 00:06:09.631 "iscsi_delete_auth_group", 00:06:09.631 "iscsi_create_auth_group", 00:06:09.631 "iscsi_set_discovery_auth", 00:06:09.631 "iscsi_get_options", 00:06:09.631 "iscsi_target_node_request_logout", 00:06:09.631 "iscsi_target_node_set_redirect", 00:06:09.631 "iscsi_target_node_set_auth", 00:06:09.631 "iscsi_target_node_add_lun", 00:06:09.631 "iscsi_get_stats", 00:06:09.631 "iscsi_get_connections", 00:06:09.631 "iscsi_portal_group_set_auth", 00:06:09.631 "iscsi_start_portal_group", 00:06:09.631 "iscsi_delete_portal_group", 00:06:09.631 "iscsi_create_portal_group", 00:06:09.631 "iscsi_get_portal_groups", 00:06:09.631 "iscsi_delete_target_node", 00:06:09.631 "iscsi_target_node_remove_pg_ig_maps", 00:06:09.631 "iscsi_target_node_add_pg_ig_maps", 00:06:09.631 "iscsi_create_target_node", 00:06:09.631 "iscsi_get_target_nodes", 00:06:09.631 "iscsi_delete_initiator_group", 00:06:09.631 "iscsi_initiator_group_remove_initiators", 00:06:09.631 "iscsi_initiator_group_add_initiators", 00:06:09.631 "iscsi_create_initiator_group", 00:06:09.631 "iscsi_get_initiator_groups", 00:06:09.631 "nvmf_set_crdt", 00:06:09.631 "nvmf_set_config", 00:06:09.631 "nvmf_set_max_subsystems", 00:06:09.631 "nvmf_stop_mdns_prr", 00:06:09.631 "nvmf_publish_mdns_prr", 00:06:09.631 "nvmf_subsystem_get_listeners", 00:06:09.631 "nvmf_subsystem_get_qpairs", 00:06:09.631 "nvmf_subsystem_get_controllers", 00:06:09.631 "nvmf_get_stats", 00:06:09.631 
"nvmf_get_transports", 00:06:09.631 "nvmf_create_transport", 00:06:09.631 "nvmf_get_targets", 00:06:09.631 "nvmf_delete_target", 00:06:09.631 "nvmf_create_target", 00:06:09.631 "nvmf_subsystem_allow_any_host", 00:06:09.631 "nvmf_subsystem_remove_host", 00:06:09.631 "nvmf_subsystem_add_host", 00:06:09.631 "nvmf_ns_remove_host", 00:06:09.631 "nvmf_ns_add_host", 00:06:09.631 "nvmf_subsystem_remove_ns", 00:06:09.631 "nvmf_subsystem_add_ns", 00:06:09.631 "nvmf_subsystem_listener_set_ana_state", 00:06:09.631 "nvmf_discovery_get_referrals", 00:06:09.631 "nvmf_discovery_remove_referral", 00:06:09.631 "nvmf_discovery_add_referral", 00:06:09.631 "nvmf_subsystem_remove_listener", 00:06:09.631 "nvmf_subsystem_add_listener", 00:06:09.631 "nvmf_delete_subsystem", 00:06:09.631 "nvmf_create_subsystem", 00:06:09.631 "nvmf_get_subsystems", 00:06:09.631 "env_dpdk_get_mem_stats", 00:06:09.631 "nbd_get_disks", 00:06:09.631 "nbd_stop_disk", 00:06:09.631 "nbd_start_disk", 00:06:09.631 "ublk_recover_disk", 00:06:09.631 "ublk_get_disks", 00:06:09.631 "ublk_stop_disk", 00:06:09.631 "ublk_start_disk", 00:06:09.631 "ublk_destroy_target", 00:06:09.631 "ublk_create_target", 00:06:09.631 "virtio_blk_create_transport", 00:06:09.631 "virtio_blk_get_transports", 00:06:09.631 "vhost_controller_set_coalescing", 00:06:09.631 "vhost_get_controllers", 00:06:09.631 "vhost_delete_controller", 00:06:09.631 "vhost_create_blk_controller", 00:06:09.631 "vhost_scsi_controller_remove_target", 00:06:09.631 "vhost_scsi_controller_add_target", 00:06:09.631 "vhost_start_scsi_controller", 00:06:09.631 "vhost_create_scsi_controller", 00:06:09.631 "thread_set_cpumask", 00:06:09.631 "framework_get_scheduler", 00:06:09.631 "framework_set_scheduler", 00:06:09.631 "framework_get_reactors", 00:06:09.631 "thread_get_io_channels", 00:06:09.631 "thread_get_pollers", 00:06:09.631 "thread_get_stats", 00:06:09.631 "framework_monitor_context_switch", 00:06:09.631 "spdk_kill_instance", 00:06:09.631 "log_enable_timestamps", 00:06:09.631 "log_get_flags", 00:06:09.631 "log_clear_flag", 00:06:09.631 "log_set_flag", 00:06:09.631 "log_get_level", 00:06:09.631 "log_set_level", 00:06:09.631 "log_get_print_level", 00:06:09.631 "log_set_print_level", 00:06:09.631 "framework_enable_cpumask_locks", 00:06:09.631 "framework_disable_cpumask_locks", 00:06:09.631 "framework_wait_init", 00:06:09.631 "framework_start_init", 00:06:09.631 "scsi_get_devices", 00:06:09.631 "bdev_get_histogram", 00:06:09.631 "bdev_enable_histogram", 00:06:09.631 "bdev_set_qos_limit", 00:06:09.631 "bdev_set_qd_sampling_period", 00:06:09.631 "bdev_get_bdevs", 00:06:09.631 "bdev_reset_iostat", 00:06:09.631 "bdev_get_iostat", 00:06:09.631 "bdev_examine", 00:06:09.631 "bdev_wait_for_examine", 00:06:09.631 "bdev_set_options", 00:06:09.631 "notify_get_notifications", 00:06:09.631 "notify_get_types", 00:06:09.631 "accel_get_stats", 00:06:09.631 "accel_set_options", 00:06:09.631 "accel_set_driver", 00:06:09.631 "accel_crypto_key_destroy", 00:06:09.631 "accel_crypto_keys_get", 00:06:09.631 "accel_crypto_key_create", 00:06:09.631 "accel_assign_opc", 00:06:09.631 "accel_get_module_info", 00:06:09.631 "accel_get_opc_assignments", 00:06:09.631 "vmd_rescan", 00:06:09.631 "vmd_remove_device", 00:06:09.631 "vmd_enable", 00:06:09.631 "sock_get_default_impl", 00:06:09.631 "sock_set_default_impl", 00:06:09.631 "sock_impl_set_options", 00:06:09.631 "sock_impl_get_options", 00:06:09.631 "iobuf_get_stats", 00:06:09.631 "iobuf_set_options", 00:06:09.631 "keyring_get_keys", 00:06:09.631 "framework_get_pci_devices", 
00:06:09.631 "framework_get_config", 00:06:09.631 "framework_get_subsystems", 00:06:09.631 "vfu_tgt_set_base_path", 00:06:09.631 "trace_get_info", 00:06:09.631 "trace_get_tpoint_group_mask", 00:06:09.631 "trace_disable_tpoint_group", 00:06:09.631 "trace_enable_tpoint_group", 00:06:09.631 "trace_clear_tpoint_mask", 00:06:09.631 "trace_set_tpoint_mask", 00:06:09.631 "spdk_get_version", 00:06:09.631 "rpc_get_methods" 00:06:09.631 ] 00:06:09.889 19:36:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 19:36:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:09.889 19:36:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3848491 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3848491 ']' 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3848491 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3848491 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3848491' 00:06:09.889 killing process with pid 3848491 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3848491 00:06:09.889 19:36:19 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3848491 00:06:10.147 00:06:10.147 real 0m1.201s 00:06:10.147 user 0m2.128s 00:06:10.147 sys 0m0.434s 00:06:10.147 19:36:19 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.147 19:36:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.147 ************************************ 00:06:10.147 END TEST spdkcli_tcp 00:06:10.147 ************************************ 00:06:10.147 19:36:19 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.147 19:36:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.148 19:36:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.148 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.148 ************************************ 00:06:10.148 START TEST dpdk_mem_utility 00:06:10.148 ************************************ 00:06:10.148 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.406 * Looking for test storage... 
00:06:10.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:10.406 19:36:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.406 19:36:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3848691 00:06:10.406 19:36:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.406 19:36:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3848691 00:06:10.406 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3848691 ']' 00:06:10.406 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.406 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.406 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.406 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.406 19:36:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 [2024-07-25 19:36:19.650648] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:10.406 [2024-07-25 19:36:19.650731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848691 ] 00:06:10.406 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.406 [2024-07-25 19:36:19.708232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.406 [2024-07-25 19:36:19.792249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.664 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.664 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:10.664 19:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:10.664 19:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:10.664 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.664 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.664 { 00:06:10.664 "filename": "/tmp/spdk_mem_dump.txt" 00:06:10.664 } 00:06:10.664 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.664 19:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.922 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:10.922 1 heaps totaling size 814.000000 MiB 00:06:10.922 size: 814.000000 MiB heap id: 0 00:06:10.922 end heaps---------- 00:06:10.922 8 mempools totaling size 598.116089 MiB 00:06:10.922 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:10.922 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:10.922 size: 84.521057 MiB name: bdev_io_3848691 00:06:10.922 size: 51.011292 MiB name: evtpool_3848691 00:06:10.922 size: 50.003479 MiB name: 
msgpool_3848691 00:06:10.922 size: 21.763794 MiB name: PDU_Pool 00:06:10.922 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:10.922 size: 0.026123 MiB name: Session_Pool 00:06:10.922 end mempools------- 00:06:10.922 6 memzones totaling size 4.142822 MiB 00:06:10.922 size: 1.000366 MiB name: RG_ring_0_3848691 00:06:10.922 size: 1.000366 MiB name: RG_ring_1_3848691 00:06:10.922 size: 1.000366 MiB name: RG_ring_4_3848691 00:06:10.922 size: 1.000366 MiB name: RG_ring_5_3848691 00:06:10.922 size: 0.125366 MiB name: RG_ring_2_3848691 00:06:10.922 size: 0.015991 MiB name: RG_ring_3_3848691 00:06:10.922 end memzones------- 00:06:10.922 19:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:10.923 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:10.923 list of free elements. size: 12.519348 MiB 00:06:10.923 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:10.923 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:10.923 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:10.923 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:10.923 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:10.923 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:10.923 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:10.923 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:10.923 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:10.923 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:10.923 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:10.923 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:10.923 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:10.923 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:10.923 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:10.923 list of standard malloc elements. 
size: 199.218079 MiB 00:06:10.923 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:10.923 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:10.923 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:10.923 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:10.923 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:10.923 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:10.923 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:10.923 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:10.923 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:10.923 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:10.923 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:10.923 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:10.923 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:10.923 list of memzone associated elements. 
size: 602.262573 MiB 00:06:10.923 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:10.923 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:10.923 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:10.923 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:10.923 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:10.923 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3848691_0 00:06:10.923 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:10.923 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3848691_0 00:06:10.923 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:10.923 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3848691_0 00:06:10.923 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:10.923 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:10.923 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:10.923 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:10.923 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:10.923 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3848691 00:06:10.923 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:10.923 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3848691 00:06:10.923 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:10.923 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3848691 00:06:10.923 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:10.923 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:10.923 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:10.923 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:10.923 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:10.923 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:10.923 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:10.923 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:10.923 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:10.923 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3848691 00:06:10.923 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:10.923 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3848691 00:06:10.923 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:10.923 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3848691 00:06:10.923 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:10.923 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3848691 00:06:10.923 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:10.923 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3848691 00:06:10.923 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:10.923 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:10.923 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:10.923 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:10.923 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:10.923 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:10.923 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:10.923 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3848691 00:06:10.923 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:10.923 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:10.923 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:10.923 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:10.923 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:10.923 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3848691 00:06:10.923 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:10.923 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:10.923 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:10.923 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3848691 00:06:10.923 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:10.923 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3848691 00:06:10.923 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:10.923 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:10.923 19:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:10.923 19:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3848691 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3848691 ']' 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3848691 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3848691 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3848691' 00:06:10.923 killing process with pid 3848691 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3848691 00:06:10.923 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3848691 00:06:11.182 00:06:11.182 real 0m1.050s 00:06:11.182 user 0m1.025s 00:06:11.182 sys 0m0.402s 00:06:11.182 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.182 19:36:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.182 ************************************ 00:06:11.182 END TEST dpdk_mem_utility 00:06:11.182 ************************************ 00:06:11.440 19:36:20 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:11.440 19:36:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.440 19:36:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.440 19:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:11.440 ************************************ 00:06:11.440 START TEST event 00:06:11.440 ************************************ 00:06:11.440 19:36:20 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:11.440 * Looking for test storage... 
00:06:11.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:11.440 19:36:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:11.440 19:36:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:11.440 19:36:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.440 19:36:20 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:11.440 19:36:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.440 19:36:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.440 ************************************ 00:06:11.440 START TEST event_perf 00:06:11.440 ************************************ 00:06:11.440 19:36:20 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.440 Running I/O for 1 seconds...[2024-07-25 19:36:20.742243] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:11.440 [2024-07-25 19:36:20.742304] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848881 ] 00:06:11.440 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.440 [2024-07-25 19:36:20.808052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.699 [2024-07-25 19:36:20.900630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.699 [2024-07-25 19:36:20.900699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.699 [2024-07-25 19:36:20.900800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.699 [2024-07-25 19:36:20.900802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.633 Running I/O for 1 seconds... 00:06:12.633 lcore 0: 240035 00:06:12.633 lcore 1: 240033 00:06:12.633 lcore 2: 240033 00:06:12.633 lcore 3: 240033 00:06:12.633 done. 00:06:12.633 00:06:12.633 real 0m1.256s 00:06:12.633 user 0m4.160s 00:06:12.633 sys 0m0.091s 00:06:12.633 19:36:21 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.633 19:36:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.633 ************************************ 00:06:12.633 END TEST event_perf 00:06:12.633 ************************************ 00:06:12.633 19:36:22 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.633 19:36:22 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:12.633 19:36:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.633 19:36:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.633 ************************************ 00:06:12.633 START TEST event_reactor 00:06:12.633 ************************************ 00:06:12.633 19:36:22 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.633 [2024-07-25 19:36:22.044164] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:12.633 [2024-07-25 19:36:22.044228] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849042 ] 00:06:12.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.892 [2024-07-25 19:36:22.107161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.892 [2024-07-25 19:36:22.199426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.265 test_start 00:06:14.265 oneshot 00:06:14.265 tick 100 00:06:14.265 tick 100 00:06:14.265 tick 250 00:06:14.265 tick 100 00:06:14.265 tick 100 00:06:14.265 tick 100 00:06:14.265 tick 250 00:06:14.265 tick 500 00:06:14.265 tick 100 00:06:14.265 tick 100 00:06:14.265 tick 250 00:06:14.265 tick 100 00:06:14.265 tick 100 00:06:14.265 test_end 00:06:14.265 00:06:14.265 real 0m1.251s 00:06:14.265 user 0m1.163s 00:06:14.265 sys 0m0.083s 00:06:14.265 19:36:23 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.265 19:36:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:14.265 ************************************ 00:06:14.265 END TEST event_reactor 00:06:14.265 ************************************ 00:06:14.265 19:36:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:14.265 19:36:23 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:14.265 19:36:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.265 19:36:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.265 ************************************ 00:06:14.265 START TEST event_reactor_perf 00:06:14.265 ************************************ 00:06:14.265 19:36:23 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:14.265 [2024-07-25 19:36:23.344163] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:14.265 [2024-07-25 19:36:23.344225] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849196 ] 00:06:14.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.265 [2024-07-25 19:36:23.406052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.265 [2024-07-25 19:36:23.498315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.199 test_start 00:06:15.199 test_end 00:06:15.199 Performance: 354381 events per second 00:06:15.199 00:06:15.199 real 0m1.250s 00:06:15.199 user 0m1.156s 00:06:15.199 sys 0m0.090s 00:06:15.199 19:36:24 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.199 19:36:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.199 ************************************ 00:06:15.199 END TEST event_reactor_perf 00:06:15.199 ************************************ 00:06:15.199 19:36:24 event -- event/event.sh@49 -- # uname -s 00:06:15.199 19:36:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:15.199 19:36:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:15.199 19:36:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.199 19:36:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.199 19:36:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.199 ************************************ 00:06:15.199 START TEST event_scheduler 00:06:15.199 ************************************ 00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:15.458 * Looking for test storage... 00:06:15.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:15.458 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:15.458 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3849374 00:06:15.458 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:15.458 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.458 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3849374 00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3849374 ']' 00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
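The waitforlisten step traced above does nothing more than poll the target's RPC socket until it answers. A minimal sketch of such a helper, assuming scripts/rpc.py relative to the SPDK tree and using rpc_get_methods as the readiness probe (the project's own, more thorough version is the common/autotest_common.sh function being traced here):

    # Sketch only -- not the autotest_common.sh implementation.
    wait_for_rpc() {
        local rpc_addr=${1:-/var/tmp/spdk.sock}
        local max_retries=${2:-100}
        local i
        for ((i = 0; i < max_retries; i++)); do
            # rpc_get_methods returns success once the target listens on the socket
            if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }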
00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.458 19:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.458 [2024-07-25 19:36:24.724251] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:15.458 [2024-07-25 19:36:24.724338] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849374 ] 00:06:15.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.458 [2024-07-25 19:36:24.788237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.458 [2024-07-25 19:36:24.884174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.458 [2024-07-25 19:36:24.884199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.458 [2024-07-25 19:36:24.884244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.458 [2024-07-25 19:36:24.884247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:15.716 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 POWER: Env isn't set yet! 00:06:15.716 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:15.716 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:15.716 POWER: Cannot get available frequencies of lcore 0 00:06:15.716 POWER: Attempting to initialise PSTAT power management... 
00:06:15.716 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:15.716 POWER: Initialized successfully for lcore 0 power management 00:06:15.716 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:15.716 POWER: Initialized successfully for lcore 1 power management 00:06:15.716 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:15.716 POWER: Initialized successfully for lcore 2 power management 00:06:15.716 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:15.716 POWER: Initialized successfully for lcore 3 power management 00:06:15.716 [2024-07-25 19:36:24.991254] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.716 [2024-07-25 19:36:24.991272] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.716 [2024-07-25 19:36:24.991282] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.716 19:36:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.716 19:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 [2024-07-25 19:36:25.091745] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:15.716 19:36:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.716 19:36:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.716 19:36:25 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.716 19:36:25 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.716 19:36:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 ************************************ 00:06:15.716 START TEST scheduler_create_thread 00:06:15.716 ************************************ 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 2 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 3 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.716 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.974 4 00:06:15.974 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 5 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 6 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 7 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 8 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 9 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 10 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.975 19:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.348 19:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.348 19:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.348 19:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.348 19:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.348 19:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 19:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.721 00:06:18.721 real 0m2.619s 00:06:18.721 user 0m0.011s 00:06:18.721 sys 0m0.004s 00:06:18.721 19:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.721 19:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 ************************************ 00:06:18.721 END TEST scheduler_create_thread 00:06:18.721 ************************************ 00:06:18.721 19:36:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.721 19:36:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3849374 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3849374 ']' 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3849374 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
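Stripped of the rpc_cmd wrapper, the scheduler_create_thread test above is a handful of scheduler_plugin RPCs. Issued by hand against a running scheduler app they would look roughly like the following (the plugin module is assumed to be on PYTHONPATH, and thread id 11 is simply the id returned in this run):

    # Create an active thread pinned to core 0, retune its activity, delete it.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100        # prints the new thread id, e.g. 11
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 11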
00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3849374 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3849374' 00:06:18.721 killing process with pid 3849374 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3849374 00:06:18.721 19:36:27 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3849374 00:06:18.979 [2024-07-25 19:36:28.219246] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:18.979 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:18.979 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:18.979 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:18.979 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:18.979 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:18.979 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:18.979 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:18.979 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:19.235 00:06:19.235 real 0m3.824s 00:06:19.235 user 0m5.837s 00:06:19.235 sys 0m0.346s 00:06:19.235 19:36:28 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.235 19:36:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.235 ************************************ 00:06:19.235 END TEST event_scheduler 00:06:19.235 ************************************ 00:06:19.235 19:36:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.235 19:36:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.235 19:36:28 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.235 19:36:28 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.235 19:36:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.235 ************************************ 00:06:19.235 START TEST app_repeat 00:06:19.235 ************************************ 00:06:19.235 19:36:28 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3849952 00:06:19.235 19:36:28 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3849952' 00:06:19.235 Process app_repeat pid: 3849952 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.235 spdk_app_start Round 0 00:06:19.235 19:36:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3849952 /var/tmp/spdk-nbd.sock 00:06:19.235 19:36:28 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3849952 ']' 00:06:19.235 19:36:28 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.236 19:36:28 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.236 19:36:28 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.236 19:36:28 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.236 19:36:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.236 [2024-07-25 19:36:28.531495] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:19.236 [2024-07-25 19:36:28.531564] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849952 ] 00:06:19.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.236 [2024-07-25 19:36:28.595415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.492 [2024-07-25 19:36:28.685048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.492 [2024-07-25 19:36:28.685054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.492 19:36:28 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.492 19:36:28 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:19.492 19:36:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.750 Malloc0 00:06:19.750 19:36:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.007 Malloc1 00:06:20.007 19:36:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.007 19:36:29 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.007 19:36:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.264 /dev/nbd0 00:06:20.264 19:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.264 19:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.264 1+0 records in 00:06:20.264 1+0 records out 00:06:20.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193707 s, 21.1 MB/s 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.264 19:36:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.264 19:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.264 19:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.264 19:36:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.522 /dev/nbd1 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:20.522 19:36:29 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.522 1+0 records in 00:06:20.522 1+0 records out 00:06:20.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202497 s, 20.2 MB/s 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.522 19:36:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.522 19:36:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.779 { 00:06:20.779 "nbd_device": "/dev/nbd0", 00:06:20.779 "bdev_name": "Malloc0" 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "nbd_device": "/dev/nbd1", 00:06:20.779 "bdev_name": "Malloc1" 00:06:20.779 } 00:06:20.779 ]' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.779 { 00:06:20.779 "nbd_device": "/dev/nbd0", 00:06:20.779 "bdev_name": "Malloc0" 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "nbd_device": "/dev/nbd1", 00:06:20.779 "bdev_name": "Malloc1" 00:06:20.779 } 00:06:20.779 ]' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.779 /dev/nbd1' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.779 /dev/nbd1' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.779 19:36:30 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.779 256+0 records in 00:06:20.779 256+0 records out 00:06:20.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502786 s, 209 MB/s 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.779 256+0 records in 00:06:20.779 256+0 records out 00:06:20.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262046 s, 40.0 MB/s 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.779 19:36:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.036 256+0 records in 00:06:21.036 256+0 records out 00:06:21.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261125 s, 40.2 MB/s 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.036 19:36:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.294 19:36:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.551 19:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.551 19:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.552 19:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.809 19:36:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.809 19:36:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.068 19:36:31 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:22.326 [2024-07-25 19:36:31.563751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.326 [2024-07-25 19:36:31.650961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.326 [2024-07-25 19:36:31.650964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.326 [2024-07-25 19:36:31.708622] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.326 [2024-07-25 19:36:31.708693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.606 19:36:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.606 19:36:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.606 spdk_app_start Round 1 00:06:25.607 19:36:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3849952 /var/tmp/spdk-nbd.sock 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3849952 ']' 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.607 19:36:34 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:25.607 19:36:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.607 Malloc0 00:06:25.607 19:36:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.865 Malloc1 00:06:25.865 19:36:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
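Behind the xtrace noise, each app_repeat round performs the same round trip against the app's /var/tmp/spdk-nbd.sock socket: create two malloc bdevs, export them over NBD, verify, then tear down. In plain rpc.py calls (device names follow this run):

    # 64 MB malloc bdevs with 4096-byte blocks, exported as /dev/nbd0 and /dev/nbd1.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1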
00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.865 19:36:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.145 /dev/nbd0 00:06:26.145 19:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.145 19:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:26.145 19:36:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.145 1+0 records in 00:06:26.145 1+0 records out 00:06:26.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252195 s, 16.2 MB/s 00:06:26.146 19:36:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.146 19:36:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:26.146 19:36:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.146 19:36:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:26.146 19:36:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:26.146 19:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.146 19:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.146 19:36:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.420 /dev/nbd1 00:06:26.420 19:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.420 19:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.420 19:36:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:26.420 19:36:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:26.420 19:36:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:26.420 19:36:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.421 1+0 records in 00:06:26.421 1+0 records out 00:06:26.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253872 s, 16.1 MB/s 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:26.421 19:36:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:26.421 19:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.421 19:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.421 19:36:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.421 19:36:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.421 19:36:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.679 { 00:06:26.679 "nbd_device": "/dev/nbd0", 00:06:26.679 "bdev_name": "Malloc0" 00:06:26.679 }, 00:06:26.679 { 00:06:26.679 "nbd_device": "/dev/nbd1", 00:06:26.679 "bdev_name": "Malloc1" 00:06:26.679 } 00:06:26.679 ]' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.679 { 00:06:26.679 "nbd_device": "/dev/nbd0", 00:06:26.679 "bdev_name": "Malloc0" 00:06:26.679 }, 00:06:26.679 { 00:06:26.679 "nbd_device": "/dev/nbd1", 00:06:26.679 "bdev_name": "Malloc1" 00:06:26.679 } 00:06:26.679 ]' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.679 /dev/nbd1' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.679 /dev/nbd1' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.679 256+0 records in 00:06:26.679 256+0 records out 00:06:26.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490703 s, 214 MB/s 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.679 256+0 records in 00:06:26.679 256+0 records out 00:06:26.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211501 s, 49.6 MB/s 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.679 19:36:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.679 256+0 records in 00:06:26.679 256+0 records out 00:06:26.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285961 s, 36.7 MB/s 00:06:26.679 19:36:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.679 19:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.680 19:36:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.938 
19:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.938 19:36:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.196 19:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.196 19:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.196 19:36:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.196 19:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.196 19:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.196 19:36:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.197 19:36:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.197 19:36:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.197 19:36:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.197 19:36:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.197 19:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.455 19:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.455 19:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.456 19:36:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.456 19:36:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.714 19:36:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.972 [2024-07-25 19:36:37.344631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.232 [2024-07-25 19:36:37.433033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.232 [2024-07-25 19:36:37.433037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.232 [2024-07-25 19:36:37.491174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:28.232 [2024-07-25 19:36:37.491238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.771 19:36:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.771 19:36:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:30.771 spdk_app_start Round 2 00:06:30.771 19:36:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3849952 /var/tmp/spdk-nbd.sock 00:06:30.771 19:36:40 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3849952 ']' 00:06:30.771 19:36:40 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.771 19:36:40 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.771 19:36:40 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.771 19:36:40 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.771 19:36:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.029 19:36:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.029 19:36:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:31.029 19:36:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.288 Malloc0 00:06:31.288 19:36:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.546 Malloc1 00:06:31.546 19:36:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.546 19:36:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.547 19:36:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.547 19:36:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.805 /dev/nbd0 00:06:31.805 
19:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.805 19:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.805 1+0 records in 00:06:31.805 1+0 records out 00:06:31.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210785 s, 19.4 MB/s 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:31.805 19:36:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:31.805 19:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.805 19:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.805 19:36:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.063 /dev/nbd1 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.063 1+0 records in 00:06:32.063 1+0 records out 00:06:32.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018092 s, 22.6 MB/s 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.063 19:36:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.063 19:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.322 { 00:06:32.322 "nbd_device": "/dev/nbd0", 00:06:32.322 "bdev_name": "Malloc0" 00:06:32.322 }, 00:06:32.322 { 00:06:32.322 "nbd_device": "/dev/nbd1", 00:06:32.322 "bdev_name": "Malloc1" 00:06:32.322 } 00:06:32.322 ]' 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.322 { 00:06:32.322 "nbd_device": "/dev/nbd0", 00:06:32.322 "bdev_name": "Malloc0" 00:06:32.322 }, 00:06:32.322 { 00:06:32.322 "nbd_device": "/dev/nbd1", 00:06:32.322 "bdev_name": "Malloc1" 00:06:32.322 } 00:06:32.322 ]' 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.322 /dev/nbd1' 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.322 /dev/nbd1' 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.322 19:36:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.580 256+0 records in 00:06:32.580 256+0 records out 00:06:32.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506356 s, 207 MB/s 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.580 19:36:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.580 256+0 records in 00:06:32.580 256+0 records out 00:06:32.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026438 s, 39.7 MB/s 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.581 256+0 records in 00:06:32.581 256+0 records out 00:06:32.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231619 s, 45.3 MB/s 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.581 19:36:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
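
The traces above show nbd_rpc_data_verify at work: two Malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written to each with dd and oflag=direct, and cmp reads the devices back against the source file. A minimal standalone sketch of that cycle follows; it assumes a running SPDK app already listening on /var/tmp/spdk-nbd.sock, root privileges with the nbd kernel module loaded, and uses SPDK_ROOT as a placeholder for the workspace path hard-coded in the log.

    # Sketch only: write random data to SPDK NBD exports and read it back.
    rpc="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=$(mktemp)

    # Create two 64 MiB malloc bdevs with a 4096-byte block size and export them over NBD.
    $rpc bdev_malloc_create 64 4096        # prints Malloc0
    $rpc bdev_malloc_create 64 4096        # prints Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data to each device, bypassing the page cache,
    # then compare the device contents against the source file.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev" && echo "$dev: data verified"
    done

    # Tear down the exports and clean up.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    rm -f "$tmp"
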
00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.839 19:36:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.097 19:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.355 19:36:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.355 19:36:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.615 19:36:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.875 [2024-07-25 19:36:43.150252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.875 [2024-07-25 19:36:43.238515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.875 [2024-07-25 19:36:43.238520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.875 [2024-07-25 19:36:43.299804] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.875 [2024-07-25 19:36:43.299877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
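
Both the attach and detach paths rely on the same bounded polling idiom seen in the waitfornbd and waitfornbd_exit traces above: re-check /proc/partitions up to 20 times until the nbd device appears or disappears. The real helper also issues a single direct-I/O dd read to confirm the device answers I/O; the sketch below keeps only the partition check, and the 0.1 s delay between attempts is an assumption (the trace never reaches a second iteration, so the actual interval is not visible here).

    # Sketch of the bounded /proc/partitions polling used by waitfornbd/waitfornbd_exit.
    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                return 0            # device is visible to the kernel
            fi
            sleep 0.1
        done
        return 1                    # never showed up
    }

    wait_for_nbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0            # device is gone
            fi
            sleep 0.1
        done
        return 1                    # still present after 20 checks
    }

    # Example: wait_for_nbd nbd0 && echo "nbd0 ready"
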
00:06:37.166 19:36:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3849952 /var/tmp/spdk-nbd.sock 00:06:37.166 19:36:45 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3849952 ']' 00:06:37.166 19:36:45 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.166 19:36:45 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.166 19:36:45 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.166 19:36:45 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.166 19:36:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:37.166 19:36:46 event.app_repeat -- event/event.sh@39 -- # killprocess 3849952 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3849952 ']' 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3849952 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3849952 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3849952' 00:06:37.166 killing process with pid 3849952 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3849952 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3849952 00:06:37.166 spdk_app_start is called in Round 0. 00:06:37.166 Shutdown signal received, stop current app iteration 00:06:37.166 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 reinitialization... 00:06:37.166 spdk_app_start is called in Round 1. 00:06:37.166 Shutdown signal received, stop current app iteration 00:06:37.166 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 reinitialization... 00:06:37.166 spdk_app_start is called in Round 2. 00:06:37.166 Shutdown signal received, stop current app iteration 00:06:37.166 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 reinitialization... 00:06:37.166 spdk_app_start is called in Round 3. 
00:06:37.166 Shutdown signal received, stop current app iteration 00:06:37.166 19:36:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:37.166 19:36:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:37.166 00:06:37.166 real 0m17.916s 00:06:37.166 user 0m38.970s 00:06:37.166 sys 0m3.246s 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.166 19:36:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.166 ************************************ 00:06:37.166 END TEST app_repeat 00:06:37.166 ************************************ 00:06:37.166 19:36:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:37.166 19:36:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.166 19:36:46 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.166 19:36:46 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.166 19:36:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.166 ************************************ 00:06:37.166 START TEST cpu_locks 00:06:37.166 ************************************ 00:06:37.166 19:36:46 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.166 * Looking for test storage... 00:06:37.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:37.166 19:36:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.166 19:36:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.166 19:36:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.166 19:36:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.166 19:36:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.166 19:36:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.166 19:36:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.166 ************************************ 00:06:37.166 START TEST default_locks 00:06:37.166 ************************************ 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3852301 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3852301 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3852301 ']' 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
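
The app_repeat teardown above, and each of the cpu_locks cases that follow, shut their targets down through the same killprocess pattern: confirm the pid still exists with kill -0, check the process name with ps so that nothing unexpected (for example a sudo wrapper) gets signalled, send the signal, then wait for the pid to exit. A simplified sketch of that idiom, not the exact helper from autotest_common.sh:

    # Sketch of the killprocess idiom used throughout these tests.
    kill_spdk_process() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        # Only signal what looks like an SPDK reactor, never a sudo wrapper.
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = "sudo" ] && { echo "refusing to kill $pid ($name)"; return 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        # 'wait' only works for children of this shell; for foreign pids, poll instead.
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
    }
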
00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.166 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.427 [2024-07-25 19:36:46.601904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:37.427 [2024-07-25 19:36:46.602000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852301 ] 00:06:37.427 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.427 [2024-07-25 19:36:46.660554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.427 [2024-07-25 19:36:46.745741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.686 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.686 19:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:37.686 19:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3852301 00:06:37.686 19:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3852301 00:06:37.686 19:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.946 lslocks: write error 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3852301 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3852301 ']' 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3852301 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3852301 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3852301' 00:06:37.946 killing process with pid 3852301 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3852301 00:06:37.946 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3852301 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3852301 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3852301 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3852301 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3852301 ']' 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3852301) - No such process 00:06:38.517 ERROR: process (pid: 3852301) is no longer running 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.517 00:06:38.517 real 0m1.130s 00:06:38.517 user 0m1.057s 00:06:38.517 sys 0m0.527s 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.517 19:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.517 ************************************ 00:06:38.517 END TEST default_locks 00:06:38.517 ************************************ 00:06:38.517 19:36:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.517 19:36:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.517 19:36:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.517 19:36:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.517 ************************************ 00:06:38.517 START TEST default_locks_via_rpc 00:06:38.517 ************************************ 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3852467 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3852467 00:06:38.517 19:36:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3852467 ']' 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.517 19:36:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.517 [2024-07-25 19:36:47.778791] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:38.517 [2024-07-25 19:36:47.778896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852467 ] 00:06:38.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.517 [2024-07-25 19:36:47.836166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.517 [2024-07-25 19:36:47.924475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3852467 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3852467 00:06:38.777 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3852467 00:06:39.346 19:36:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3852467 ']' 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3852467 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3852467 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3852467' 00:06:39.346 killing process with pid 3852467 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3852467 00:06:39.346 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3852467 00:06:39.607 00:06:39.607 real 0m1.189s 00:06:39.607 user 0m1.128s 00:06:39.607 sys 0m0.514s 00:06:39.607 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.607 19:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.607 ************************************ 00:06:39.607 END TEST default_locks_via_rpc 00:06:39.607 ************************************ 00:06:39.607 19:36:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:39.607 19:36:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.607 19:36:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.607 19:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.607 ************************************ 00:06:39.607 START TEST non_locking_app_on_locked_coremask 00:06:39.607 ************************************ 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3852629 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3852629 /var/tmp/spdk.sock 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3852629 ']' 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
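
default_locks_via_rpc above exercises the same core-lock files as default_locks, but toggles them at runtime over the RPC socket instead of at start-up: framework_disable_cpumask_locks releases the per-core lock files, framework_enable_cpumask_locks re-acquires them, and lslocks confirms whether a spdk_cpu_lock entry is held by the target pid. A hedged outline of that check, assuming a single spdk_tgt already listening on the default /var/tmp/spdk.sock and SPDK_ROOT as a placeholder for the checkout path:

    # Sketch: toggle SPDK CPU core lock files over RPC and verify with lslocks.
    rpc="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"
    pid=$(pgrep -f spdk_tgt | head -n1)   # assumption: exactly one spdk_tgt instance

    has_core_lock() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    $rpc framework_disable_cpumask_locks
    has_core_lock "$pid" && echo "unexpected: core lock still held"

    $rpc framework_enable_cpumask_locks
    has_core_lock "$pid" && echo "core lock re-acquired, as expected"
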
00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.607 19:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.607 [2024-07-25 19:36:49.014961] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:39.607 [2024-07-25 19:36:49.015076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852629 ] 00:06:39.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.867 [2024-07-25 19:36:49.073773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.867 [2024-07-25 19:36:49.162073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3852651 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3852651 /var/tmp/spdk2.sock 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3852651 ']' 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.126 19:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.126 [2024-07-25 19:36:49.465667] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:40.126 [2024-07-25 19:36:49.465759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852651 ] 00:06:40.126 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.385 [2024-07-25 19:36:49.561207] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
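
The non_locking_app_on_locked_coremask case above starts a second spdk_tgt on the same core mask but with --disable-cpumask-locks and a separate RPC socket, which is why the second instance logs "CPU core locks deactivated" instead of failing to acquire the core-0 lock. A minimal sketch of that arrangement, with paths and the 0x1 mask taken from the trace and the readiness wait simplified to polling a known RPC:

    # Sketch: one locking and one non-locking SPDK target sharing core 0.
    # SPDK_ROOT stands in for the SPDK checkout path used by this job.
    "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 &                       # first instance holds the core-0 lock
    pid1=$!
    "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                        # second instance skips lock acquisition

    # Poll each RPC socket until the target answers (stand-in for waitforlisten).
    for sock in /var/tmp/spdk.sock /var/tmp/spdk2.sock; do
        until "$SPDK_ROOT/scripts/rpc.py" -s "$sock" spdk_get_version >/dev/null 2>&1; do
            sleep 0.5
        done
    done

    # Only the first instance should hold a spdk_cpu_lock file.
    lslocks -p "$pid1" | grep -c spdk_cpu_lock    # expected: non-zero
    lslocks -p "$pid2" | grep -c spdk_cpu_lock    # expected: 0
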
00:06:40.385 [2024-07-25 19:36:49.561241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.385 [2024-07-25 19:36:49.743902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.320 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.320 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:41.320 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3852629 00:06:41.320 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3852629 00:06:41.320 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.580 lslocks: write error 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3852629 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3852629 ']' 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3852629 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3852629 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3852629' 00:06:41.580 killing process with pid 3852629 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3852629 00:06:41.580 19:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3852629 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3852651 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3852651 ']' 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3852651 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3852651 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3852651' 00:06:42.521 
killing process with pid 3852651 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3852651 00:06:42.521 19:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3852651 00:06:42.779 00:06:42.779 real 0m3.162s 00:06:42.779 user 0m3.302s 00:06:42.779 sys 0m1.041s 00:06:42.779 19:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.779 19:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 ************************************ 00:06:42.780 END TEST non_locking_app_on_locked_coremask 00:06:42.780 ************************************ 00:06:42.780 19:36:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.780 19:36:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.780 19:36:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.780 19:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 ************************************ 00:06:42.780 START TEST locking_app_on_unlocked_coremask 00:06:42.780 ************************************ 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3853059 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3853059 /var/tmp/spdk.sock 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3853059 ']' 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.780 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.039 [2024-07-25 19:36:52.229650] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:43.039 [2024-07-25 19:36:52.229728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853059 ] 00:06:43.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.039 [2024-07-25 19:36:52.292235] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
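The "CPU core locks deactivated." notice above is the direct effect of the --disable-cpumask-locks flag passed to this target. A minimal hedged sketch of reproducing just that behaviour outside the test harness, using the same binary path as this job:

  # Hedged sketch (not part of the suite): start the target without claiming
  # per-core lock files. Binary path is the one used by this job.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
  # Expected: the startup log contains "CPU core locks deactivated." and no
  # /var/tmp/spdk_cpu_lock_000 file is created while it runs.

With the flag omitted, the same command takes a lock on /var/tmp/spdk_cpu_lock_000 for core 0 (visible via lslocks), which is what the lock checks later in this trace rely on.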
00:06:43.039 [2024-07-25 19:36:52.292272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.039 [2024-07-25 19:36:52.381992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.297 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.297 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3853073 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3853073 /var/tmp/spdk2.sock 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3853073 ']' 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.298 19:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.298 [2024-07-25 19:36:52.689597] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:43.298 [2024-07-25 19:36:52.689676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853073 ] 00:06:43.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.558 [2024-07-25 19:36:52.786787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.558 [2024-07-25 19:36:52.965815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.496 19:36:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.496 19:36:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:44.496 19:36:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3853073 00:06:44.496 19:36:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3853073 00:06:44.496 19:36:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.755 lslocks: write error 00:06:44.755 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3853059 00:06:44.755 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3853059 ']' 00:06:44.755 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3853059 00:06:44.755 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:44.755 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.756 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853059 00:06:44.756 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:44.756 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.756 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3853059' 00:06:44.756 killing process with pid 3853059 00:06:44.756 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3853059 00:06:44.756 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3853059 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3853073 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3853073 ']' 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3853073 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853073 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
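The stray "lslocks: write error" above is harmless: the locks_exist helper pipes lslocks into grep -q, and grep -q exits as soon as it finds a match, so lslocks loses its output pipe mid-write while the lock itself is present. A hedged standalone version of the same check, with the PID taken from this run as an example:

  # Hedged sketch of the locks_exist check: confirm a running spdk_tgt holds its
  # CPU core lock file(s). The PID is an example value from the trace above.
  pid=3853073
  lslocks -p "$pid" | grep spdk_cpu_lock        # one entry per locked core file
  ls -l /var/tmp/spdk_cpu_lock_* 2>/dev/null    # the files themselves, e.g. spdk_cpu_lock_000 for core 0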
00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3853073' 00:06:45.728 killing process with pid 3853073 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3853073 00:06:45.728 19:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3853073 00:06:45.987 00:06:45.987 real 0m3.107s 00:06:45.987 user 0m3.244s 00:06:45.987 sys 0m1.049s 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.987 ************************************ 00:06:45.987 END TEST locking_app_on_unlocked_coremask 00:06:45.987 ************************************ 00:06:45.987 19:36:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:45.987 19:36:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.987 19:36:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.987 19:36:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.987 ************************************ 00:06:45.987 START TEST locking_app_on_locked_coremask 00:06:45.987 ************************************ 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3853495 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3853495 /var/tmp/spdk.sock 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3853495 ']' 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.987 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.987 [2024-07-25 19:36:55.385337] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:45.987 [2024-07-25 19:36:55.385455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853495 ] 00:06:45.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.247 [2024-07-25 19:36:55.445254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.247 [2024-07-25 19:36:55.533720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.505 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.505 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:46.505 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3853506 00:06:46.505 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3853506 /var/tmp/spdk2.sock 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3853506 /var/tmp/spdk2.sock 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3853506 /var/tmp/spdk2.sock 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3853506 ']' 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.506 19:36:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.506 [2024-07-25 19:36:55.836807] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:46.506 [2024-07-25 19:36:55.836897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853506 ] 00:06:46.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.506 [2024-07-25 19:36:55.933440] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3853495 has claimed it. 00:06:46.506 [2024-07-25 19:36:55.933501] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3853506) - No such process 00:06:47.442 ERROR: process (pid: 3853506) is no longer running 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3853495 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3853495 00:06:47.442 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.702 lslocks: write error 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3853495 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3853495 ']' 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3853495 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853495 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3853495' 00:06:47.702 killing process with pid 3853495 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3853495 00:06:47.702 19:36:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3853495 00:06:48.270 00:06:48.270 real 0m2.064s 00:06:48.270 user 0m2.192s 00:06:48.270 sys 0m0.674s 00:06:48.270 19:36:57 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.270 19:36:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.270 ************************************ 00:06:48.270 END TEST locking_app_on_locked_coremask 00:06:48.270 ************************************ 00:06:48.270 19:36:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.270 19:36:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.270 19:36:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.270 19:36:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.270 ************************************ 00:06:48.270 START TEST locking_overlapped_coremask 00:06:48.270 ************************************ 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3853798 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3853798 /var/tmp/spdk.sock 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3853798 ']' 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.270 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.270 [2024-07-25 19:36:57.500268] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:48.270 [2024-07-25 19:36:57.500342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853798 ] 00:06:48.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.270 [2024-07-25 19:36:57.563012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.270 [2024-07-25 19:36:57.653875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.270 [2024-07-25 19:36:57.653943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.270 [2024-07-25 19:36:57.653946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3853804 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3853804 /var/tmp/spdk2.sock 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3853804 /var/tmp/spdk2.sock 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3853804 /var/tmp/spdk2.sock 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3853804 ']' 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.528 19:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.528 [2024-07-25 19:36:57.956489] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
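The two reactor masks used in this test overlap on exactly one core: -m 0x7 selects cores 0-2 and -m 0x1c selects cores 2-4. A small hedged helper (not part of the suite) that expands a hex cpumask into core numbers makes the overlap visible:

  # Hedged helper: list the cores selected by a hex cpumask.
  expand_mask() {
    local mask=$(( $1 ))
    local i
    for i in $(seq 0 63); do
      (( (mask >> i) & 1 )) && echo "core $i"
    done
  }
  expand_mask 0x7     # core 0, core 1, core 2
  expand_mask 0x1c    # core 2, core 3, core 4

Core 2 sitting in both masks is why the second target in the trace below fails with "Cannot create lock on core 2".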
00:06:48.528 [2024-07-25 19:36:57.956596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853804 ] 00:06:48.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.788 [2024-07-25 19:36:58.045556] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3853798 has claimed it. 00:06:48.788 [2024-07-25 19:36:58.045625] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3853804) - No such process 00:06:49.356 ERROR: process (pid: 3853804) is no longer running 00:06:49.356 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.356 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:49.356 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:49.356 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3853798 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3853798 ']' 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3853798 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853798 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3853798' 00:06:49.357 killing process with pid 3853798 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3853798 00:06:49.357 19:36:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3853798 00:06:49.925 00:06:49.925 real 0m1.640s 00:06:49.925 user 0m4.415s 00:06:49.925 sys 0m0.452s 00:06:49.925 19:36:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.925 19:36:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.925 ************************************ 00:06:49.925 END TEST locking_overlapped_coremask 00:06:49.925 ************************************ 00:06:49.925 19:36:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.925 19:36:59 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.925 19:36:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.925 19:36:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.925 ************************************ 00:06:49.925 START TEST locking_overlapped_coremask_via_rpc 00:06:49.925 ************************************ 00:06:49.925 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:49.925 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3853966 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3853966 /var/tmp/spdk.sock 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3853966 ']' 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.926 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.926 [2024-07-25 19:36:59.192621] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:49.926 [2024-07-25 19:36:59.192697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853966 ] 00:06:49.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.926 [2024-07-25 19:36:59.256403] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
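The plain overlapped case that just finished can be reproduced outside the harness with two targets whose masks share a core; a condensed, hedged sketch using this job's paths and sockets (the suite's waitforlisten is replaced by a crude sleep):

  # Hedged sketch of the locking_overlapped_coremask collision.
  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x7 & first=$!                   # locks /var/tmp/spdk_cpu_lock_000..002
  sleep 2                                  # stand-in for the suite's waitforlisten
  $BIN -m 0x1c -r /var/tmp/spdk2.sock      # expected to exit with "Cannot create lock
                                           # on core 2, probably process <pid> has claimed it."
  kill "$first"                            # clean up the first target

The RPC variant that starts here does the same thing in two phases: both targets come up with --disable-cpumask-locks, and the locks are only claimed later through framework_enable_cpumask_locks.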
00:06:49.926 [2024-07-25 19:36:59.256440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.926 [2024-07-25 19:36:59.350189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.926 [2024-07-25 19:36:59.350240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.926 [2024-07-25 19:36:59.350244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3854054 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3854054 /var/tmp/spdk2.sock 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3854054 ']' 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.184 19:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.444 [2024-07-25 19:36:59.655379] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:50.444 [2024-07-25 19:36:59.655489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854054 ] 00:06:50.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.444 [2024-07-25 19:36:59.770608] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.444 [2024-07-25 19:36:59.770651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.704 [2024-07-25 19:36:59.955859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.704 [2024-07-25 19:36:59.959094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.704 [2024-07-25 19:36:59.959096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.270 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.271 [2024-07-25 19:37:00.605157] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3853966 has claimed it. 
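The claim failure above is the second target's reaction to a framework_enable_cpumask_locks RPC: the first target (pid 3853966, mask 0x7) enabled its locks over /var/tmp/spdk.sock a moment earlier, so core 2 is already taken when the same call reaches /var/tmp/spdk2.sock, and the JSON-RPC error response follows next in the trace. A hedged sketch of driving the same calls by hand, assuming SPDK's usual scripts/rpc.py client is present in this checkout:

  # Hedged sketch: enable core locks on running targets over JSON-RPC.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # mask 0x7: succeeds, takes spdk_cpu_lock_000..002
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # mask 0x1c: fails with
                                                               # "Failed to claim CPU core: 2" (code -32603)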
00:06:51.271 request: 00:06:51.271 { 00:06:51.271 "method": "framework_enable_cpumask_locks", 00:06:51.271 "req_id": 1 00:06:51.271 } 00:06:51.271 Got JSON-RPC error response 00:06:51.271 response: 00:06:51.271 { 00:06:51.271 "code": -32603, 00:06:51.271 "message": "Failed to claim CPU core: 2" 00:06:51.271 } 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3853966 /var/tmp/spdk.sock 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3853966 ']' 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.271 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3854054 /var/tmp/spdk2.sock 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3854054 ']' 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.529 19:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.788 00:06:51.788 real 0m1.973s 00:06:51.788 user 0m0.989s 00:06:51.788 sys 0m0.196s 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.788 19:37:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 ************************************ 00:06:51.788 END TEST locking_overlapped_coremask_via_rpc 00:06:51.788 ************************************ 00:06:51.788 19:37:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.788 19:37:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3853966 ]] 00:06:51.788 19:37:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3853966 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3853966 ']' 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3853966 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853966 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3853966' 00:06:51.788 killing process with pid 3853966 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3853966 00:06:51.788 19:37:01 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3853966 00:06:52.355 19:37:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3854054 ]] 00:06:52.355 19:37:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3854054 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3854054 ']' 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3854054 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3854054 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3854054' 00:06:52.355 killing process with pid 3854054 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3854054 00:06:52.355 19:37:01 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3854054 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3853966 ]] 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3853966 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3853966 ']' 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3853966 00:06:52.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3853966) - No such process 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3853966 is not found' 00:06:52.613 Process with pid 3853966 is not found 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3854054 ]] 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3854054 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3854054 ']' 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3854054 00:06:52.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3854054) - No such process 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3854054 is not found' 00:06:52.613 Process with pid 3854054 is not found 00:06:52.613 19:37:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.613 00:06:52.613 real 0m15.521s 00:06:52.613 user 0m26.980s 00:06:52.613 sys 0m5.368s 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.613 19:37:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 ************************************ 00:06:52.613 END TEST cpu_locks 00:06:52.613 ************************************ 00:06:52.613 00:06:52.613 real 0m41.369s 00:06:52.613 user 1m18.401s 00:06:52.613 sys 0m9.464s 00:06:52.613 19:37:02 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.613 19:37:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 ************************************ 00:06:52.613 END TEST event 00:06:52.613 ************************************ 00:06:52.613 19:37:02 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:52.613 19:37:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.613 19:37:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.613 19:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:52.872 ************************************ 00:06:52.872 START TEST thread 00:06:52.872 ************************************ 00:06:52.872 19:37:02 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:52.872 * Looking for test storage... 00:06:52.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:52.872 19:37:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.872 19:37:02 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:52.872 19:37:02 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.872 19:37:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.872 ************************************ 00:06:52.872 START TEST thread_poller_perf 00:06:52.872 ************************************ 00:06:52.872 19:37:02 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.872 [2024-07-25 19:37:02.143947] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:52.872 [2024-07-25 19:37:02.144014] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854465 ] 00:06:52.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.873 [2024-07-25 19:37:02.206605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.132 [2024-07-25 19:37:02.303512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.132 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.069 ====================================== 00:06:54.069 busy:2711797603 (cyc) 00:06:54.069 total_run_count: 298000 00:06:54.069 tsc_hz: 2700000000 (cyc) 00:06:54.069 ====================================== 00:06:54.069 poller_cost: 9099 (cyc), 3370 (nsec) 00:06:54.069 00:06:54.069 real 0m1.262s 00:06:54.069 user 0m1.180s 00:06:54.069 sys 0m0.077s 00:06:54.069 19:37:03 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.069 19:37:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.069 ************************************ 00:06:54.069 END TEST thread_poller_perf 00:06:54.069 ************************************ 00:06:54.069 19:37:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.069 19:37:03 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:54.069 19:37:03 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.069 19:37:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.069 ************************************ 00:06:54.069 START TEST thread_poller_perf 00:06:54.069 ************************************ 00:06:54.069 19:37:03 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.069 [2024-07-25 19:37:03.456310] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
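The summary block of the first poller_perf run above is plain arithmetic over the reported counters: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure is that divided by the TSC rate. A hedged recomputation with awk:

  # Hedged sketch: rederive the poller_cost line from the counters printed above.
  awk 'BEGIN {
    busy = 2711797603; runs = 298000; tsc_hz = 2700000000
    cyc  = busy / runs                              # ~9099 cycles per poller invocation
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
  }'

The second run, whose output follows below, lands at 700 cycles (259 nsec) by the same calculation; the only change in the invocation is the -l 0 poller period passed to poller_perf.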
00:06:54.069 [2024-07-25 19:37:03.456370] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854619 ] 00:06:54.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.328 [2024-07-25 19:37:03.518910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.328 [2024-07-25 19:37:03.611458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.328 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:55.264 ====================================== 00:06:55.264 busy:2702775968 (cyc) 00:06:55.264 total_run_count: 3856000 00:06:55.264 tsc_hz: 2700000000 (cyc) 00:06:55.264 ====================================== 00:06:55.264 poller_cost: 700 (cyc), 259 (nsec) 00:06:55.522 00:06:55.522 real 0m1.253s 00:06:55.522 user 0m1.169s 00:06:55.522 sys 0m0.078s 00:06:55.522 19:37:04 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.522 19:37:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.522 ************************************ 00:06:55.522 END TEST thread_poller_perf 00:06:55.522 ************************************ 00:06:55.522 19:37:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.522 00:06:55.522 real 0m2.654s 00:06:55.522 user 0m2.417s 00:06:55.522 sys 0m0.236s 00:06:55.522 19:37:04 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.522 19:37:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.522 ************************************ 00:06:55.522 END TEST thread 00:06:55.522 ************************************ 00:06:55.522 19:37:04 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.522 19:37:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.522 19:37:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.522 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:06:55.522 ************************************ 00:06:55.522 START TEST accel 00:06:55.522 ************************************ 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.522 * Looking for test storage... 
00:06:55.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:55.522 19:37:04 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:55.522 19:37:04 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:55.522 19:37:04 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.522 19:37:04 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3854816 00:06:55.522 19:37:04 accel -- accel/accel.sh@63 -- # waitforlisten 3854816 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@827 -- # '[' -z 3854816 ']' 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.522 19:37:04 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:55.522 19:37:04 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.522 19:37:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.522 19:37:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.522 19:37:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.522 19:37:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.522 19:37:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.522 19:37:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.522 19:37:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:55.522 19:37:04 accel -- accel/accel.sh@41 -- # jq -r . 00:06:55.522 [2024-07-25 19:37:04.872561] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:55.522 [2024-07-25 19:37:04.872647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854816 ] 00:06:55.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.522 [2024-07-25 19:37:04.938794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.779 [2024-07-25 19:37:05.031791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@860 -- # return 0 00:06:56.037 19:37:05 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:56.037 19:37:05 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:56.037 19:37:05 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:56.037 19:37:05 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:56.037 19:37:05 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:56.037 19:37:05 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.037 19:37:05 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.037 19:37:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.037 19:37:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.037 19:37:05 accel -- accel/accel.sh@75 -- # killprocess 3854816 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@946 -- # '[' -z 3854816 ']' 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@950 -- # kill -0 3854816 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@951 -- # uname 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3854816 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3854816' 00:06:56.037 killing process with pid 3854816 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@965 -- # kill 3854816 00:06:56.037 19:37:05 accel -- common/autotest_common.sh@970 -- # wait 3854816 00:06:56.606 19:37:05 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:56.606 19:37:05 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:56.606 19:37:05 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:56.606 19:37:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.606 19:37:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.606 19:37:05 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:56.606 19:37:05 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:56.606 19:37:05 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.606 19:37:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:56.606 19:37:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:56.606 19:37:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.606 19:37:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.606 19:37:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.606 ************************************ 00:06:56.606 START TEST accel_missing_filename 00:06:56.606 ************************************ 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.606 19:37:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:56.606 19:37:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:56.607 [2024-07-25 19:37:05.863948] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:56.607 [2024-07-25 19:37:05.864015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854986 ] 00:06:56.607 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.607 [2024-07-25 19:37:05.927979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.607 [2024-07-25 19:37:06.019626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.865 [2024-07-25 19:37:06.081535] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.865 [2024-07-25 19:37:06.162638] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:56.865 A filename is required. 
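The accel_missing_filename run just above ends with accel_perf aborting ("A filename is required.") because the compress workload was given no input file. As a hedged illustration only (paths shortened relative to the SPDK checkout; the full paths appear in the log), the failing shape versus one that supplies the input via -l looks roughly like the lines below; note the next test in the log deliberately adds -y on top of -l to provoke a different error instead:

  ./build/examples/accel_perf -t 1 -w compress                      # no -l: "A filename is required."
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib    # -l names the uncompressed input file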
00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.865 00:06:56.865 real 0m0.400s 00:06:56.865 user 0m0.291s 00:06:56.865 sys 0m0.143s 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.865 19:37:06 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:56.865 ************************************ 00:06:56.865 END TEST accel_missing_filename 00:06:56.865 ************************************ 00:06:56.865 19:37:06 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.865 19:37:06 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:56.865 19:37:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.865 19:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.123 ************************************ 00:06:57.123 START TEST accel_compress_verify 00:06:57.123 ************************************ 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.123 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.123 
19:37:06 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:57.123 19:37:06 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:57.123 [2024-07-25 19:37:06.316877] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:57.123 [2024-07-25 19:37:06.316943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855012 ] 00:06:57.123 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.123 [2024-07-25 19:37:06.384699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.123 [2024-07-25 19:37:06.479198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.123 [2024-07-25 19:37:06.541183] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.383 [2024-07-25 19:37:06.619162] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:57.383 00:06:57.383 Compression does not support the verify option, aborting. 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.383 00:06:57.383 real 0m0.399s 00:06:57.383 user 0m0.282s 00:06:57.383 sys 0m0.152s 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.383 19:37:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:57.383 ************************************ 00:06:57.383 END TEST accel_compress_verify 00:06:57.383 ************************************ 00:06:57.383 19:37:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:57.383 19:37:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:57.383 19:37:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.383 19:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.383 ************************************ 00:06:57.383 START TEST accel_wrong_workload 00:06:57.383 ************************************ 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.383 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
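The negative tests above all run through the harness's NOT wrapper (common/autotest_common.sh@648-675 in the trace): the wrapped accel_perf is expected to fail, its exit status is folded down (234 becomes 106, then 1 via a case statement), and NOT itself succeeds only when that status is non-zero. A heavily simplified sketch of the pattern, not the actual autotest_common.sh source, with the 128-subtraction inferred from the 234 -> 106 step in the trace:

  NOT() {
    local es=0
    "$@" || es=$?                         # run the command that is expected to fail
    (( es > 128 )) && es=$(( es - 128 ))  # matches the 234 -> 106 step traced above
    (( es != 0 ))                         # NOT succeeds only if the wrapped command failed
  }
  NOT accel_perf -t 1 -w foobar           # passes: foobar is an unsupported workload type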
00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:57.383 19:37:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:57.383 Unsupported workload type: foobar 00:06:57.383 [2024-07-25 19:37:06.763876] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:57.383 accel_perf options: 00:06:57.383 [-h help message] 00:06:57.383 [-q queue depth per core] 00:06:57.383 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.383 [-T number of threads per core 00:06:57.383 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.383 [-t time in seconds] 00:06:57.383 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.383 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:57.383 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.383 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.383 [-S for crc32c workload, use this seed value (default 0) 00:06:57.383 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.384 [-f for fill workload, use this BYTE value (default 255) 00:06:57.384 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.384 [-y verify result if this switch is on] 00:06:57.384 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.384 Can be used to spread operations across a wider range of memory. 
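The option summary accel_perf just printed is the flag list these tests exercise. A hedged pairing of those flags with workloads that appear later in this log (values illustrative, paths shortened relative to the SPDK checkout):

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y              # crc32c, seed 32, verify the result
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill with byte 0x80, queue depth 64, 64 tasks/core
  ./build/examples/accel_perf -t 1 -w foobar                       # rejected, as shown above: unsupported workload type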
00:06:57.384 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:57.384 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.384 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.384 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.384 00:06:57.384 real 0m0.024s 00:06:57.384 user 0m0.013s 00:06:57.384 sys 0m0.011s 00:06:57.384 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.384 19:37:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:57.384 ************************************ 00:06:57.384 END TEST accel_wrong_workload 00:06:57.384 ************************************ 00:06:57.384 Error: writing output failed: Broken pipe 00:06:57.384 19:37:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.384 19:37:06 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:57.384 19:37:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.384 19:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.384 ************************************ 00:06:57.384 START TEST accel_negative_buffers 00:06:57.384 ************************************ 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.384 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:57.384 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:57.384 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:57.644 19:37:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:57.644 -x option must be non-negative. 
00:06:57.644 [2024-07-25 19:37:06.827162] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:57.644 accel_perf options: 00:06:57.644 [-h help message] 00:06:57.644 [-q queue depth per core] 00:06:57.644 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.644 [-T number of threads per core 00:06:57.644 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.644 [-t time in seconds] 00:06:57.644 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.644 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:57.644 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.644 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.644 [-S for crc32c workload, use this seed value (default 0) 00:06:57.644 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.644 [-f for fill workload, use this BYTE value (default 255) 00:06:57.644 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.644 [-y verify result if this switch is on] 00:06:57.644 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.644 Can be used to spread operations across a wider range of memory. 00:06:57.644 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:57.644 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.644 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.644 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.644 00:06:57.644 real 0m0.022s 00:06:57.644 user 0m0.013s 00:06:57.644 sys 0m0.009s 00:06:57.644 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.644 19:37:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:57.644 ************************************ 00:06:57.644 END TEST accel_negative_buffers 00:06:57.644 ************************************ 00:06:57.644 Error: writing output failed: Broken pipe 00:06:57.644 19:37:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:57.644 19:37:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:57.644 19:37:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.644 19:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.644 ************************************ 00:06:57.644 START TEST accel_crc32c 00:06:57.644 ************************************ 00:06:57.644 19:37:06 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:57.644 19:37:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:57.644 [2024-07-25 19:37:06.896813] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:57.644 [2024-07-25 19:37:06.896878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855197 ] 00:06:57.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.644 [2024-07-25 19:37:06.960882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.644 [2024-07-25 19:37:07.053210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.904 19:37:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:59.284 19:37:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.284 00:06:59.284 real 0m1.413s 00:06:59.284 user 0m1.273s 00:06:59.284 sys 0m0.143s 00:06:59.284 19:37:08 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.284 19:37:08 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 ************************************ 00:06:59.284 END TEST accel_crc32c 00:06:59.284 ************************************ 00:06:59.284 19:37:08 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:59.284 19:37:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:59.284 19:37:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.284 19:37:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 ************************************ 00:06:59.284 START TEST accel_crc32c_C2 00:06:59.284 ************************************ 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:59.284 19:37:08 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:59.284 [2024-07-25 19:37:08.351882] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:59.284 [2024-07-25 19:37:08.351943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855350 ] 00:06:59.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.284 [2024-07-25 19:37:08.414889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.284 [2024-07-25 19:37:08.506769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.284 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.285 19:37:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
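The accel_crc32c_C2 trace in progress here corresponds to the run_test command recorded earlier in the log: crc32c with result verification and the io vector size raised to 2 via -C. A hedged standalone form (inside the harness the JSON accel config arrives through -c /dev/fd/62; run by hand that option can simply be dropped):

  ./build/examples/accel_perf -t 1 -w crc32c -y -C 2   # crc32c over 2-element io vectors, verify the result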
00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.661 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.662 00:07:00.662 real 0m1.400s 00:07:00.662 user 0m1.254s 00:07:00.662 sys 0m0.147s 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.662 19:37:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:00.662 ************************************ 00:07:00.662 END TEST accel_crc32c_C2 00:07:00.662 ************************************ 00:07:00.662 19:37:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:00.662 19:37:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:00.662 19:37:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.662 19:37:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.662 ************************************ 00:07:00.662 START TEST accel_copy 00:07:00.662 ************************************ 00:07:00.662 19:37:09 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 
19:37:09 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:00.662 19:37:09 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:00.662 [2024-07-25 19:37:09.795978] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:00.662 [2024-07-25 19:37:09.796043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855573 ] 00:07:00.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.662 [2024-07-25 19:37:09.857028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.662 [2024-07-25 19:37:09.949465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.662 19:37:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:02.041 19:37:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.041 00:07:02.041 real 0m1.402s 00:07:02.041 user 0m1.261s 00:07:02.041 sys 0m0.143s 00:07:02.041 19:37:11 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.041 19:37:11 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.041 ************************************ 00:07:02.041 END TEST accel_copy 00:07:02.041 ************************************ 00:07:02.041 19:37:11 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.041 19:37:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:02.041 19:37:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.041 19:37:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.041 ************************************ 00:07:02.041 START TEST accel_fill 00:07:02.041 ************************************ 00:07:02.041 19:37:11 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.041 19:37:11 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:02.041 [2024-07-25 19:37:11.247807] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:02.041 [2024-07-25 19:37:11.247870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855784 ] 00:07:02.041 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.041 [2024-07-25 19:37:11.311725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.041 [2024-07-25 19:37:11.403743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.041 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
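For the accel_fill trace being read back here, the parameters (fill, 0x80, 4096 bytes, 64, 64, 1 second) match the run_test command earlier in the log. A hedged standalone form of that invocation:

  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill 4 KiB buffers with 0x80, qd 64, 64 tasks/core, verify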
00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.042 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.302 19:37:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.240 19:37:12 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.240 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:03.241 19:37:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.241 00:07:03.241 real 0m1.413s 00:07:03.241 user 0m1.266s 00:07:03.241 sys 0m0.150s 00:07:03.241 19:37:12 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.241 19:37:12 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:03.241 ************************************ 00:07:03.241 END TEST accel_fill 00:07:03.241 ************************************ 00:07:03.241 19:37:12 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:03.241 19:37:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:03.241 19:37:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.241 19:37:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.520 ************************************ 00:07:03.520 START TEST accel_copy_crc32c 00:07:03.520 ************************************ 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.520 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
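The accel_fill run that finishes above was driven as accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y against 4096-byte buffers (the val='4096 bytes' and val=0x80 entries in its trace match the -f 128 fill value). As a rough sketch, the same workload could be launched by hand with the binary path taken from this workspace; the -c /dev/fd/62 JSON config descriptor that the harness passes is omitted here on the assumption that it is optional for a plain software run.
# rough sketch: re-running the fill workload from the trace by hand
# (path copied from this job's workspace; harness-provided -c config omitted)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y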
00:07:03.521 [2024-07-25 19:37:12.702553] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:03.521 [2024-07-25 19:37:12.702616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855942 ] 00:07:03.521 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.521 [2024-07-25 19:37:12.763963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.521 [2024-07-25 19:37:12.856480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.521 19:37:12 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.521 19:37:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.909 00:07:04.909 real 0m1.407s 00:07:04.909 user 0m1.268s 00:07:04.909 sys 0m0.142s 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.909 19:37:14 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:04.909 ************************************ 00:07:04.909 END TEST accel_copy_crc32c 00:07:04.909 ************************************ 00:07:04.909 19:37:14 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.909 19:37:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:04.909 19:37:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.909 19:37:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.909 ************************************ 00:07:04.909 START TEST accel_copy_crc32c_C2 00:07:04.909 ************************************ 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.909 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:04.909 [2024-07-25 19:37:14.153536] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:04.909 [2024-07-25 19:37:14.153600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856099 ] 00:07:04.909 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.909 [2024-07-25 19:37:14.216018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.909 [2024-07-25 19:37:14.311547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.168 19:37:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.549 00:07:06.549 real 0m1.406s 00:07:06.549 user 0m1.265s 00:07:06.549 sys 0m0.143s 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.549 19:37:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:06.549 
************************************ 00:07:06.549 END TEST accel_copy_crc32c_C2 00:07:06.549 ************************************ 00:07:06.549 19:37:15 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:06.549 19:37:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:06.549 19:37:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.549 19:37:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.549 ************************************ 00:07:06.549 START TEST accel_dualcast 00:07:06.549 ************************************ 00:07:06.549 19:37:15 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:06.549 [2024-07-25 19:37:15.603207] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
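The START TEST / END TEST banners and the real/user/sys timings around each block above come from the run_test wrapper in common/autotest_common.sh, which times the wrapped command and prints the markers this log is organized around. The helper below is a hypothetical, simplified illustration of that pattern, not SPDK's actual implementation.
# hypothetical, simplified stand-in for autotest_common.sh's run_test:
# print a START banner, time the wrapped command, then print an END banner
run_test_sketch() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"
  echo "************ END TEST $name ************"
}
run_test_sketch accel_dualcast echo "placeholder for: accel_test -t 1 -w dualcast -y"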
00:07:06.549 [2024-07-25 19:37:15.603266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856370 ] 00:07:06.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.549 [2024-07-25 19:37:15.667563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.549 [2024-07-25 19:37:15.758509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:06.549 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 
19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 19:37:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.929 19:37:16 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.929 19:37:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.930 19:37:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:07.930 19:37:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.930 00:07:07.930 real 0m1.391s 00:07:07.930 user 0m1.263s 00:07:07.930 sys 0m0.130s 00:07:07.930 19:37:16 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.930 19:37:16 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:07.930 ************************************ 00:07:07.930 END TEST accel_dualcast 00:07:07.930 ************************************ 00:07:07.930 19:37:16 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:07.930 19:37:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:07.930 19:37:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.930 19:37:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.930 ************************************ 00:07:07.930 START TEST accel_compare 00:07:07.930 ************************************ 00:07:07.930 19:37:17 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:07.930 [2024-07-25 19:37:17.038013] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
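Every accel_perf run above boots a single-core DPDK EAL instance (the "[ DPDK EAL parameters: ... ]" lines, core mask 0x1, per-pid --file-prefix) and logs "EAL: No free 2048 kB hugepages reported on node 1", which does not stop these software-path runs from completing. If hugepage availability needed checking on the host, standard Linux interfaces (not SPDK-specific) would show it:
# standard Linux views of 2048 kB hugepage state (not SPDK-specific)
grep -i huge /proc/meminfo
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages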
00:07:07.930 [2024-07-25 19:37:17.038079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856532 ] 00:07:07.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.930 [2024-07-25 19:37:17.100587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.930 [2024-07-25 19:37:17.193118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.930 19:37:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:09.309 19:37:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.309 00:07:09.309 real 0m1.409s 00:07:09.309 user 0m1.267s 00:07:09.309 sys 0m0.144s 00:07:09.309 19:37:18 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.309 19:37:18 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:09.309 ************************************ 00:07:09.309 END TEST accel_compare 00:07:09.309 ************************************ 00:07:09.309 19:37:18 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:09.309 19:37:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:09.309 19:37:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.309 19:37:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.309 ************************************ 00:07:09.309 START TEST accel_xor 00:07:09.309 ************************************ 00:07:09.309 19:37:18 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:09.309 19:37:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:09.309 19:37:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:09.309 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.309 19:37:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:09.309 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.309 19:37:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:09.310 [2024-07-25 19:37:18.494538] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
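Each test block ends with the three checks visible just above: that a module name and an opcode were captured from accel_perf's output, and that the module is literally "software". The \s\o\f\t\w\a\r\e form is only bash xtrace showing an escaped right-hand side, which forces [[ == ]] to do a literal string comparison instead of pattern matching. A standalone equivalent of that check, with the values hard-coded in place of the ones parsed from accel_perf:
# equivalent of the [[ -n ... ]] / [[ ... == "software" ]] assertions above;
# the values are hard-coded here in place of those parsed from accel_perf
accel_module=software
accel_opc=compare
[[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == "software" ]] \
  && echo "compare ran on the software module"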
00:07:09.310 [2024-07-25 19:37:18.494602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856685 ] 00:07:09.310 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.310 [2024-07-25 19:37:18.557590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.310 [2024-07-25 19:37:18.650234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.310 19:37:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.693 
19:37:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.693 00:07:10.693 real 0m1.404s 00:07:10.693 user 0m1.269s 00:07:10.693 sys 0m0.137s 00:07:10.693 19:37:19 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.693 19:37:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 ************************************ 00:07:10.693 END TEST accel_xor 00:07:10.693 ************************************ 00:07:10.693 19:37:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:10.693 19:37:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:10.693 19:37:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.693 19:37:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 ************************************ 00:07:10.693 START TEST accel_xor 00:07:10.693 ************************************ 00:07:10.693 19:37:19 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.693 19:37:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.693 [2024-07-25 19:37:19.943478] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
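The repeated "IFS=:", "read -r var val" and "case "$var" in" entries in the trace above are the harness stepping through colon-separated test settings (operation, buffer size, queue depth, module, and so on). A minimal standalone loop of the same shape, with key names assumed purely for illustration and not taken from the real accel.sh, would be:

#!/usr/bin/env bash
# Sketch only: key names below are assumed for illustration; this is not the
# actual accel.sh. It mirrors the IFS=: / read -r var val / case "$var" in
# pattern visible in the trace while walking colon-separated settings.
accel_opc=""
accel_module=""
while IFS=: read -r var val; do
  case "$var" in
    opc)    accel_opc=$val ;;      # e.g. xor, dif_verify, compress
    module) accel_module=$val ;;   # e.g. software
    *)      ;;                     # other keys ignored in this sketch
  esac
done <<'EOF'
opc:xor
module:software
EOF
echo "opcode=$accel_opc module=$accel_module"

Running the sketch prints "opcode=xor module=software", matching the accel_opc=xor and accel_module=software assignments that appear in the trace.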
00:07:10.693 [2024-07-25 19:37:19.943556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856884 ] 00:07:10.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.693 [2024-07-25 19:37:20.007909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.693 [2024-07-25 19:37:20.109084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.953 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.954 19:37:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.344 19:37:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.344 19:37:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.345 
19:37:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.345 19:37:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.345 00:07:12.345 real 0m1.421s 00:07:12.345 user 0m1.273s 00:07:12.345 sys 0m0.151s 00:07:12.345 19:37:21 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.345 19:37:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.345 ************************************ 00:07:12.345 END TEST accel_xor 00:07:12.345 ************************************ 00:07:12.345 19:37:21 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:12.345 19:37:21 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:12.345 19:37:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.345 19:37:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.345 ************************************ 00:07:12.345 START TEST accel_dif_verify 00:07:12.345 ************************************ 00:07:12.345 19:37:21 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:12.345 [2024-07-25 19:37:21.412861] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
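The invocation above passes "-c /dev/fd/62", which suggests the accel JSON config is handed to accel_perf over an extra file descriptor opened from process substitution; the exact plumbing is not shown in the trace, so the following is only a sketch of that idiom, with cat standing in for the binary and a placeholder config:

#!/usr/bin/env bash
# Sketch of the "-c /dev/fd/62" idiom, assuming process substitution; this is
# not the real accel.sh. A config is generated on the fly, attached to fd 62,
# and the consumer reads it back through /dev/fd/62.
demo_config() {
  printf '%s\n' '{ "workload": "dif_verify", "run_time": 1 }'   # placeholder config
}

# Stand-in consumer: the logged command is
#   .../spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
# and, as far as the trace shows, it receives its config the same way.
cat /dev/fd/62 62< <(demo_config)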
00:07:12.345 [2024-07-25 19:37:21.412927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857115 ] 00:07:12.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.345 [2024-07-25 19:37:21.470850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.345 [2024-07-25 19:37:21.558510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 
19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.345 19:37:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.726 
19:37:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:13.726 19:37:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.726 00:07:13.726 real 0m1.387s 00:07:13.726 user 0m1.250s 00:07:13.726 sys 0m0.142s 00:07:13.726 19:37:22 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.726 19:37:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.726 ************************************ 00:07:13.726 END TEST accel_dif_verify 00:07:13.726 ************************************ 00:07:13.726 19:37:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:13.726 19:37:22 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:13.726 19:37:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.726 19:37:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.726 ************************************ 00:07:13.726 START TEST accel_dif_generate 00:07:13.726 ************************************ 00:07:13.726 19:37:22 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:13.726 19:37:22 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:13.726 [2024-07-25 19:37:22.842043] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:13.726 [2024-07-25 19:37:22.842128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857277 ] 00:07:13.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.726 [2024-07-25 19:37:22.904721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.726 [2024-07-25 19:37:22.997502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.726 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.726 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.726 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.726 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.726 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.727 19:37:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:15.106 19:37:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.106 00:07:15.106 real 0m1.413s 00:07:15.106 user 0m1.261s 00:07:15.106 sys 
0m0.156s 00:07:15.106 19:37:24 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.106 19:37:24 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:15.106 ************************************ 00:07:15.106 END TEST accel_dif_generate 00:07:15.106 ************************************ 00:07:15.106 19:37:24 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:15.106 19:37:24 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:15.106 19:37:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.106 19:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.106 ************************************ 00:07:15.106 START TEST accel_dif_generate_copy 00:07:15.106 ************************************ 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.106 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.106 [2024-07-25 19:37:24.297590] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
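Each test in this log follows the same observable pattern: a START TEST banner, the timed command, a real/user/sys summary, and an END TEST banner. A stand-in that reproduces only that visible shape (the real run_test helper in common/autotest_common.sh is not reproduced here) could look like:

#!/usr/bin/env bash
# Minimal sketch of the banner/timing pattern seen in this log; assumed
# stand-in, not the real run_test from common/autotest_common.sh.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                      # bash's time keyword prints the real/user/sys lines
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test_sketch demo_sleep sleep 1

Invoked this way, the sketch prints the fenced banners around roughly one second of wall-clock time, with the timing summary in the same real/user/sys form as the test output above.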
00:07:15.106 [2024-07-25 19:37:24.297650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857444 ] 00:07:15.107 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.107 [2024-07-25 19:37:24.358757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.107 [2024-07-25 19:37:24.455731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.107 19:37:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.488 00:07:16.488 real 0m1.401s 00:07:16.488 user 0m1.258s 00:07:16.488 sys 0m0.146s 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.488 19:37:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.488 ************************************ 00:07:16.488 END TEST accel_dif_generate_copy 00:07:16.488 ************************************ 00:07:16.488 19:37:25 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:16.488 19:37:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.488 19:37:25 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:16.488 19:37:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.488 19:37:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.488 ************************************ 00:07:16.488 START TEST accel_comp 00:07:16.488 ************************************ 00:07:16.488 19:37:25 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.488 19:37:25 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.489 19:37:25 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.489 19:37:25 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.489 19:37:25 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.489 19:37:25 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:16.489 [2024-07-25 19:37:25.741920] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:16.489 [2024-07-25 19:37:25.741986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857717 ] 00:07:16.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.489 [2024-07-25 19:37:25.806679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.489 [2024-07-25 19:37:25.899267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 
19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.748 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.749 19:37:25 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.749 19:37:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:18.128 19:37:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.128 00:07:18.128 real 0m1.417s 00:07:18.128 user 0m1.266s 00:07:18.128 sys 0m0.154s 00:07:18.128 19:37:27 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.128 19:37:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:18.128 ************************************ 00:07:18.128 END TEST accel_comp 00:07:18.128 ************************************ 00:07:18.128 19:37:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.128 19:37:27 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:18.128 19:37:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.128 19:37:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.129 ************************************ 00:07:18.129 START TEST accel_decomp 00:07:18.129 ************************************ 00:07:18.129 19:37:27 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:18.129 [2024-07-25 19:37:27.198763] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:18.129 [2024-07-25 19:37:27.198828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857880 ] 00:07:18.129 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.129 [2024-07-25 19:37:27.259804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.129 [2024-07-25 19:37:27.351031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.129 19:37:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.511 19:37:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.512 19:37:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.512 00:07:19.512 real 0m1.408s 00:07:19.512 user 0m1.273s 00:07:19.512 sys 0m0.138s 00:07:19.512 19:37:28 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.512 19:37:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:19.512 ************************************ 00:07:19.512 END TEST accel_decomp 00:07:19.512 ************************************ 00:07:19.512 
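The accel_comp and accel_decomp runs above both reduce to the accel_perf command line echoed in the trace. As a minimal sketch, the decompress pass could be repeated outside the run_test/accel_test wrappers roughly as below; paths and flags are copied from the logged invocation, and the JSON accel config that accel.sh normally pipes in over /dev/fd/62 is omitted here on the assumption that it is not required (the trace shows build_accel_config produced an empty config, so these runs fell back to the software module in any case).

  #!/usr/bin/env bash
  # Sketch only: re-run the decompress pass exercised above without the test harness.
  # Assumes a built SPDK tree laid out like this CI node; adjust SPDK_DIR for a local checkout.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$SPDK_DIR/build/examples/accel_perf" \
    -t 1 \
    -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y

The later tests in this section pass the same binary a few extra flags, and the trace records the effect: the -o 0 runs report a '111250 bytes' payload instead of '4096 bytes', the -m 0xf runs start reactors on cores 0-3 (EAL core mask 0xf), and the -T 2 run records a per-worker thread value of 2.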
19:37:28 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.512 19:37:28 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:19.512 19:37:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.512 19:37:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.512 ************************************ 00:07:19.512 START TEST accel_decmop_full 00:07:19.512 ************************************ 00:07:19.512 19:37:28 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:19.512 [2024-07-25 19:37:28.650208] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:07:19.512 [2024-07-25 19:37:28.650263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858032 ] 00:07:19.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.512 [2024-07-25 19:37:28.713710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.512 [2024-07-25 19:37:28.806185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.512 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.513 19:37:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.890 19:37:30 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.890 00:07:20.890 real 0m1.429s 00:07:20.890 user 0m1.281s 00:07:20.890 sys 0m0.151s 00:07:20.890 19:37:30 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.890 19:37:30 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:20.890 ************************************ 00:07:20.891 END TEST accel_decmop_full 00:07:20.891 ************************************ 00:07:20.891 19:37:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.891 19:37:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:20.891 19:37:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.891 19:37:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.891 ************************************ 00:07:20.891 START TEST accel_decomp_mcore 00:07:20.891 ************************************ 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:20.891 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:20.891 [2024-07-25 19:37:30.122626] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:20.891 [2024-07-25 19:37:30.122686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858306 ] 00:07:20.891 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.891 [2024-07-25 19:37:30.185382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.891 [2024-07-25 19:37:30.280567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.891 [2024-07-25 19:37:30.280621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.891 [2024-07-25 19:37:30.280738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.891 [2024-07-25 19:37:30.280741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.150 19:37:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.087 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.088 00:07:22.088 real 0m1.404s 00:07:22.088 user 0m4.680s 00:07:22.088 sys 0m0.151s 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.088 19:37:31 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.088 ************************************ 00:07:22.088 END TEST accel_decomp_mcore 00:07:22.088 ************************************ 00:07:22.345 19:37:31 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.345 19:37:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:22.345 19:37:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.345 19:37:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.345 ************************************ 00:07:22.345 START TEST accel_decomp_full_mcore 00:07:22.345 ************************************ 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.345 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.345 [2024-07-25 19:37:31.571950] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:22.345 [2024-07-25 19:37:31.572017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858465 ] 00:07:22.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.345 [2024-07-25 19:37:31.633274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.345 [2024-07-25 19:37:31.729057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.345 [2024-07-25 19:37:31.729112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.345 [2024-07-25 19:37:31.729232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.345 [2024-07-25 19:37:31.729234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.604 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.605 19:37:31 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.605 19:37:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.557 00:07:23.557 real 0m1.412s 00:07:23.557 user 0m4.732s 00:07:23.557 sys 0m0.146s 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.557 19:37:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:23.557 ************************************ 00:07:23.557 END TEST accel_decomp_full_mcore 00:07:23.557 ************************************ 00:07:23.816 19:37:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.816 19:37:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:23.816 19:37:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.816 19:37:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.816 ************************************ 00:07:23.816 START TEST accel_decomp_mthread 00:07:23.816 ************************************ 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:23.816 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:23.816 [2024-07-25 19:37:33.033598] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:23.816 [2024-07-25 19:37:33.033661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858634 ] 00:07:23.816 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.816 [2024-07-25 19:37:33.095069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.816 [2024-07-25 19:37:33.188022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.074 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.074 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.074 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.074 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.075 19:37:33 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.010 00:07:25.010 real 0m1.420s 00:07:25.010 user 0m1.278s 00:07:25.010 sys 0m0.146s 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.010 19:37:34 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.010 ************************************ 00:07:25.010 END TEST accel_decomp_mthread 00:07:25.010 ************************************ 00:07:25.270 19:37:34 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.270 19:37:34 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:25.270 19:37:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.270 19:37:34 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.270 ************************************ 00:07:25.270 START TEST accel_decomp_full_mthread 00:07:25.271 ************************************ 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.271 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.271 [2024-07-25 19:37:34.500334] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:07:25.271 [2024-07-25 19:37:34.500412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858785 ] 00:07:25.271 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.271 [2024-07-25 19:37:34.562563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.271 [2024-07-25 19:37:34.656754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.531 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.532 19:37:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.913 00:07:26.913 real 0m1.446s 00:07:26.913 user 0m1.297s 00:07:26.913 sys 0m0.151s 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.913 19:37:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:26.913 ************************************ 00:07:26.913 END TEST accel_decomp_full_mthread 00:07:26.913 
************************************ 00:07:26.913 19:37:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:26.913 19:37:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:26.914 19:37:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:26.914 19:37:35 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:26.914 19:37:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.914 19:37:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.914 19:37:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.914 19:37:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.914 19:37:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.914 19:37:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.914 19:37:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.914 19:37:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:26.914 19:37:35 accel -- accel/accel.sh@41 -- # jq -r . 00:07:26.914 ************************************ 00:07:26.914 START TEST accel_dif_functional_tests 00:07:26.914 ************************************ 00:07:26.914 19:37:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:26.914 [2024-07-25 19:37:36.012601] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:26.914 [2024-07-25 19:37:36.012671] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859062 ] 00:07:26.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.914 [2024-07-25 19:37:36.079642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.914 [2024-07-25 19:37:36.174754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.914 [2024-07-25 19:37:36.174804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.914 [2024-07-25 19:37:36.174807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.914 00:07:26.914 00:07:26.914 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.914 http://cunit.sourceforge.net/ 00:07:26.914 00:07:26.914 00:07:26.914 Suite: accel_dif 00:07:26.914 Test: verify: DIF generated, GUARD check ...passed 00:07:26.914 Test: verify: DIF generated, APPTAG check ...passed 00:07:26.914 Test: verify: DIF generated, REFTAG check ...passed 00:07:26.914 Test: verify: DIF not generated, GUARD check ...[2024-07-25 19:37:36.267201] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.914 passed 00:07:26.914 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 19:37:36.267276] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.914 passed 00:07:26.914 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 19:37:36.267313] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.914 passed 00:07:26.914 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:26.914 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 19:37:36.267381] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:26.914 passed 00:07:26.914 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:26.914 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:26.914 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:26.914 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 19:37:36.267540] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:26.914 passed 00:07:26.914 Test: verify copy: DIF generated, GUARD check ...passed 00:07:26.914 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:26.914 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:26.914 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 19:37:36.267706] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.914 passed 00:07:26.914 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 19:37:36.267745] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.914 passed 00:07:26.914 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 19:37:36.267781] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.914 passed 00:07:26.914 Test: generate copy: DIF generated, GUARD check ...passed 00:07:26.914 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:26.914 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:26.914 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:26.914 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:26.914 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:26.914 Test: generate copy: iovecs-len validate ...[2024-07-25 19:37:36.268028] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:26.914 passed 00:07:26.914 Test: generate copy: buffer alignment validate ...passed 00:07:26.914 00:07:26.914 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.914 suites 1 1 n/a 0 0 00:07:26.914 tests 26 26 26 0 0 00:07:26.914 asserts 115 115 115 0 n/a 00:07:26.914 00:07:26.914 Elapsed time = 0.005 seconds 00:07:27.173 00:07:27.173 real 0m0.493s 00:07:27.173 user 0m0.745s 00:07:27.173 sys 0m0.185s 00:07:27.173 19:37:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.173 19:37:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 END TEST accel_dif_functional_tests 00:07:27.173 ************************************ 00:07:27.173 00:07:27.173 real 0m31.723s 00:07:27.173 user 0m35.051s 00:07:27.173 sys 0m4.606s 00:07:27.173 19:37:36 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.173 19:37:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 END TEST accel 00:07:27.173 ************************************ 00:07:27.173 19:37:36 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:27.173 19:37:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.173 19:37:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.173 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 START TEST accel_rpc 00:07:27.173 ************************************ 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:27.173 * Looking for test storage... 00:07:27.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:27.173 19:37:36 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.173 19:37:36 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3859131 00:07:27.173 19:37:36 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:27.173 19:37:36 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3859131 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3859131 ']' 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.173 19:37:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.431 [2024-07-25 19:37:36.642291] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
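The accel_dif_functional_tests block above is the standalone CUnit binary test/accel/dif/dif, launched by the harness as test/accel/dif/dif -c /dev/fd/62. The dif.c *ERROR* lines are the expected output of the negative "DIF not generated" and "verify copy" cases, not real failures; the summary confirms 26/26 tests and 115/115 asserts passed with an elapsed time of 0.005 seconds. A hand run is roughly the sketch below, where treating -c as optional (when no accel-module JSON is needed) is an assumption, not something the trace shows:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/test/accel/dif/dif"     # CUnit suite: verify / verify copy / generate copy DIF checks
  # The harness additionally passes '-c /dev/fd/62' to feed the accel JSON config
  # produced by build_accel_config before the run.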
00:07:27.431 [2024-07-25 19:37:36.642386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859131 ] 00:07:27.431 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.431 [2024-07-25 19:37:36.699244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.431 [2024-07-25 19:37:36.784270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.431 19:37:36 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.431 19:37:36 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:27.431 19:37:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:27.431 19:37:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:27.431 19:37:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:27.431 19:37:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:27.431 19:37:36 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:27.431 19:37:36 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.431 19:37:36 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.431 19:37:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.689 ************************************ 00:07:27.689 START TEST accel_assign_opcode 00:07:27.689 ************************************ 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.689 [2024-07-25 19:37:36.868958] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.689 [2024-07-25 19:37:36.876965] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.689 19:37:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.689 19:37:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.949 19:37:37 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.949 software 00:07:27.949 00:07:27.949 real 0m0.289s 00:07:27.949 user 0m0.037s 00:07:27.949 sys 0m0.006s 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.949 19:37:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.949 ************************************ 00:07:27.949 END TEST accel_assign_opcode 00:07:27.949 ************************************ 00:07:27.949 19:37:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3859131 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3859131 ']' 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3859131 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859131 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859131' 00:07:27.949 killing process with pid 3859131 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@965 -- # kill 3859131 00:07:27.949 19:37:37 accel_rpc -- common/autotest_common.sh@970 -- # wait 3859131 00:07:28.208 00:07:28.208 real 0m1.070s 00:07:28.208 user 0m0.995s 00:07:28.208 sys 0m0.424s 00:07:28.208 19:37:37 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.208 19:37:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.208 ************************************ 00:07:28.208 END TEST accel_rpc 00:07:28.208 ************************************ 00:07:28.208 19:37:37 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.208 19:37:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:28.208 19:37:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.208 19:37:37 -- common/autotest_common.sh@10 -- # set +x 00:07:28.466 ************************************ 00:07:28.466 START TEST app_cmdline 00:07:28.466 ************************************ 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.466 * Looking for test storage... 
00:07:28.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.466 19:37:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.466 19:37:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3859335 00:07:28.466 19:37:37 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.466 19:37:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3859335 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3859335 ']' 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.466 19:37:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.466 [2024-07-25 19:37:37.762758] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:28.466 [2024-07-25 19:37:37.762854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859335 ] 00:07:28.466 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.466 [2024-07-25 19:37:37.820202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.726 [2024-07-25 19:37:37.906382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.726 19:37:38 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.726 19:37:38 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:28.726 19:37:38 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.984 { 00:07:28.984 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:28.984 "fields": { 00:07:28.984 "major": 24, 00:07:28.984 "minor": 5, 00:07:28.984 "patch": 1, 00:07:28.984 "suffix": "-pre", 00:07:28.984 "commit": "241d0f3c9" 00:07:28.984 } 00:07:28.984 } 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.984 19:37:38 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.984 19:37:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.984 19:37:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.984 19:37:38 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.242 19:37:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:29.242 19:37:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:29.242 19:37:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.242 19:37:38 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.242 request: 00:07:29.242 { 00:07:29.242 "method": "env_dpdk_get_mem_stats", 00:07:29.242 "req_id": 1 00:07:29.242 } 00:07:29.242 Got JSON-RPC error response 00:07:29.242 response: 00:07:29.242 { 00:07:29.242 "code": -32601, 00:07:29.242 "message": "Method not found" 00:07:29.242 } 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.501 19:37:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3859335 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3859335 ']' 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3859335 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859335 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.501 19:37:38 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.502 19:37:38 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859335' 00:07:29.502 killing process with pid 3859335 00:07:29.502 19:37:38 app_cmdline -- common/autotest_common.sh@965 -- # kill 3859335 00:07:29.502 19:37:38 app_cmdline -- common/autotest_common.sh@970 -- # wait 3859335 00:07:29.760 00:07:29.760 real 0m1.457s 00:07:29.760 user 0m1.771s 00:07:29.760 sys 0m0.463s 00:07:29.760 19:37:39 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.760 19:37:39 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.760 ************************************ 00:07:29.760 END TEST app_cmdline 00:07:29.760 ************************************ 00:07:29.760 19:37:39 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.760 19:37:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.760 19:37:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.760 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:07:29.760 ************************************ 00:07:29.760 START TEST version 00:07:29.760 ************************************ 00:07:29.760 19:37:39 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:30.018 * Looking for test storage... 00:07:30.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:30.018 19:37:39 version -- app/version.sh@17 -- # get_header_version major 00:07:30.018 19:37:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # cut -f2 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.018 19:37:39 version -- app/version.sh@17 -- # major=24 00:07:30.018 19:37:39 version -- app/version.sh@18 -- # get_header_version minor 00:07:30.018 19:37:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # cut -f2 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.018 19:37:39 version -- app/version.sh@18 -- # minor=5 00:07:30.018 19:37:39 version -- app/version.sh@19 -- # get_header_version patch 00:07:30.018 19:37:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # cut -f2 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.018 19:37:39 version -- app/version.sh@19 -- # patch=1 00:07:30.018 19:37:39 version -- app/version.sh@20 -- # get_header_version suffix 00:07:30.018 19:37:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # cut -f2 00:07:30.018 19:37:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.018 19:37:39 version -- app/version.sh@20 -- # suffix=-pre 00:07:30.018 19:37:39 version -- app/version.sh@22 -- # version=24.5 00:07:30.018 19:37:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:30.018 19:37:39 version -- app/version.sh@25 -- # version=24.5.1 00:07:30.018 19:37:39 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:30.018 19:37:39 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:30.019 19:37:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
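The version test traced above is string surgery on include/spdk/version.h: get_header_version greps the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines, cuts the value field, strips the quotes, and the assembled 24.5.1rc0 is compared against python3 -c 'import spdk; print(spdk.__version__)'. A standalone sketch of the same pipeline follows; the -pre to rc0 step is reconstructed from the traced assignments (version=24.5.1, then version=24.5.1rc0), not copied from version.sh itself:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  get_header_version() {         # e.g. get_header_version MAJOR  ->  24
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" \
          "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"
  patch=$(get_header_version PATCH)
  suffix=$(get_header_version SUFFIX)
  (( patch != 0 )) && version="$version.$patch"
  [ "$suffix" = "-pre" ] && version="${version}rc0"
  echo "$version"                # 24.5.1rc0, matching spdk.__version__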
00:07:30.019 19:37:39 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:30.019 19:37:39 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:30.019 00:07:30.019 real 0m0.100s 00:07:30.019 user 0m0.058s 00:07:30.019 sys 0m0.064s 00:07:30.019 19:37:39 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.019 19:37:39 version -- common/autotest_common.sh@10 -- # set +x 00:07:30.019 ************************************ 00:07:30.019 END TEST version 00:07:30.019 ************************************ 00:07:30.019 19:37:39 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@198 -- # uname -s 00:07:30.019 19:37:39 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:30.019 19:37:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.019 19:37:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.019 19:37:39 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:30.019 19:37:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.019 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:07:30.019 19:37:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:30.019 19:37:39 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:30.019 19:37:39 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.019 19:37:39 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.019 19:37:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.019 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:07:30.019 ************************************ 00:07:30.019 START TEST nvmf_tcp 00:07:30.019 ************************************ 00:07:30.019 19:37:39 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.019 * Looking for test storage... 00:07:30.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.019 19:37:39 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.019 19:37:39 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.019 19:37:39 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.019 19:37:39 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.019 19:37:39 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.019 19:37:39 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.019 19:37:39 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:30.019 19:37:39 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:30.019 19:37:39 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:30.019 19:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:30.019 19:37:39 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:30.019 19:37:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.019 19:37:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.019 19:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.019 ************************************ 00:07:30.019 START TEST nvmf_example 00:07:30.019 ************************************ 00:07:30.019 19:37:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:30.276 * Looking for test storage... 
00:07:30.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.276 19:37:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.277 19:37:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.173 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.174 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.174 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.174 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:07:32.433 00:07:32.433 --- 10.0.0.2 ping statistics --- 00:07:32.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.433 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:07:32.433 00:07:32.433 --- 10.0.0.1 ping statistics --- 00:07:32.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.433 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:32.433 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3861347 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3861347 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3861347 ']' 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
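The nvmftestinit/nvmf_tcp_init trace above splits the two E810 ports between a dedicated network namespace (target side, cvl_0_0) and the root namespace (initiator side, cvl_0_1). Condensed for readability, the equivalent manual setup is roughly the following; the commands and addresses are the ones traced above, only the comments are added:

  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                  # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator address

The nvmf example target is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF), so it listens on 10.0.0.2 while spdk_nvme_perf later connects from the root namespace via 10.0.0.1.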
00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.434 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.692 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:32.692 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:32.692 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:32.692 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.693 19:37:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:32.693 19:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:32.693 EAL: No free 2048 kB hugepages reported on node 1 
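The rpc_cmd calls above build the whole target configuration that the perf run exercises: a TCP transport, a 64 MiB malloc bdev, and a subsystem with one namespace and one TCP listener. As a sketch, the same configuration could be applied by hand to a running nvmf target with scripts/rpc.py (the rpc.py client and its default /var/tmp/spdk.sock socket are assumptions here; the RPC names and arguments are exactly the ones traced above):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -o and -u 8192 taken verbatim from the trace
  scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev with 512-byte blocks; returns its name (Malloc0)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host (-a)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: queue depth 64, 4 KiB I/Os, random mixed workload with a 30% read share (-M),
  # running for 10 seconds against the TCP listener configured above
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'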
00:07:44.921 Initializing NVMe Controllers 00:07:44.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:44.921 Initialization complete. Launching workers. 00:07:44.921 ======================================================== 00:07:44.921 Latency(us) 00:07:44.921 Device Information : IOPS MiB/s Average min max 00:07:44.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15144.80 59.16 4226.85 878.01 15239.72 00:07:44.921 ======================================================== 00:07:44.921 Total : 15144.80 59.16 4226.85 878.01 15239.72 00:07:44.921 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.921 rmmod nvme_tcp 00:07:44.921 rmmod nvme_fabrics 00:07:44.921 rmmod nvme_keyring 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3861347 ']' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3861347 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3861347 ']' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3861347 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3861347 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3861347' 00:07:44.921 killing process with pid 3861347 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3861347 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3861347 00:07:44.921 nvmf threads initialize successfully 00:07:44.921 bdev subsystem init successfully 00:07:44.921 created a nvmf target service 00:07:44.921 create targets's poll groups done 00:07:44.921 all subsystems of target started 00:07:44.921 nvmf target is running 00:07:44.921 all subsystems of target stopped 00:07:44.921 destroy targets's poll groups done 00:07:44.921 destroyed the nvmf target service 00:07:44.921 bdev subsystem finish successfully 00:07:44.921 nvmf threads destroy successfully 00:07:44.921 19:37:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.921 19:37:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.181 19:37:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.442 19:37:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:45.442 19:37:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.442 19:37:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.442 00:07:45.442 real 0m15.200s 00:07:45.442 user 0m42.186s 00:07:45.442 sys 0m3.254s 00:07:45.442 19:37:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.442 19:37:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.443 ************************************ 00:07:45.443 END TEST nvmf_example 00:07:45.443 ************************************ 00:07:45.443 19:37:54 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:45.443 19:37:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:45.443 19:37:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.443 19:37:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.443 ************************************ 00:07:45.443 START TEST nvmf_filesystem 00:07:45.443 ************************************ 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:45.443 * Looking for test storage... 
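For reference, the nvmf_example perf summary above is internally consistent: 15144.80 IOPS at a 4096-byte I/O size is 15144.80 * 4096 B ≈ 59.16 MiB/s, matching the reported throughput, and by Little's law an average latency of ≈ 4226.85 us at that rate corresponds to 15144.80 * 0.00422685 s ≈ 64 outstanding I/Os, i.e. the queue depth requested with -q 64.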
00:07:45.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:45.443 19:37:54 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:45.443 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.444 19:37:54 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:45.444 #define SPDK_CONFIG_H 00:07:45.444 #define SPDK_CONFIG_APPS 1 00:07:45.444 #define SPDK_CONFIG_ARCH native 00:07:45.444 #undef SPDK_CONFIG_ASAN 00:07:45.444 #undef SPDK_CONFIG_AVAHI 00:07:45.444 #undef SPDK_CONFIG_CET 00:07:45.444 #define SPDK_CONFIG_COVERAGE 1 00:07:45.444 #define SPDK_CONFIG_CROSS_PREFIX 00:07:45.444 #undef SPDK_CONFIG_CRYPTO 00:07:45.444 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:45.444 #undef SPDK_CONFIG_CUSTOMOCF 00:07:45.444 #undef SPDK_CONFIG_DAOS 00:07:45.444 #define SPDK_CONFIG_DAOS_DIR 00:07:45.444 #define SPDK_CONFIG_DEBUG 1 00:07:45.444 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:45.444 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:45.444 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:45.444 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.444 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:45.444 #undef SPDK_CONFIG_DPDK_UADK 00:07:45.444 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:45.444 #define SPDK_CONFIG_EXAMPLES 1 00:07:45.444 #undef SPDK_CONFIG_FC 00:07:45.444 #define SPDK_CONFIG_FC_PATH 00:07:45.444 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:45.444 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:45.444 #undef SPDK_CONFIG_FUSE 00:07:45.444 #undef SPDK_CONFIG_FUZZER 00:07:45.444 #define SPDK_CONFIG_FUZZER_LIB 00:07:45.444 #undef SPDK_CONFIG_GOLANG 00:07:45.444 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:45.444 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:45.444 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:45.444 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:45.444 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:45.444 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:45.444 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:45.444 #define SPDK_CONFIG_IDXD 1 00:07:45.444 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:45.444 #undef SPDK_CONFIG_IPSEC_MB 00:07:45.444 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:45.444 #define SPDK_CONFIG_ISAL 1 00:07:45.444 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:45.444 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:45.444 #define SPDK_CONFIG_LIBDIR 00:07:45.444 #undef SPDK_CONFIG_LTO 00:07:45.444 #define SPDK_CONFIG_MAX_LCORES 
00:07:45.444 #define SPDK_CONFIG_NVME_CUSE 1 00:07:45.444 #undef SPDK_CONFIG_OCF 00:07:45.444 #define SPDK_CONFIG_OCF_PATH 00:07:45.444 #define SPDK_CONFIG_OPENSSL_PATH 00:07:45.444 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:45.444 #define SPDK_CONFIG_PGO_DIR 00:07:45.444 #undef SPDK_CONFIG_PGO_USE 00:07:45.444 #define SPDK_CONFIG_PREFIX /usr/local 00:07:45.444 #undef SPDK_CONFIG_RAID5F 00:07:45.444 #undef SPDK_CONFIG_RBD 00:07:45.444 #define SPDK_CONFIG_RDMA 1 00:07:45.444 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:45.444 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:45.444 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:45.444 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:45.444 #define SPDK_CONFIG_SHARED 1 00:07:45.444 #undef SPDK_CONFIG_SMA 00:07:45.444 #define SPDK_CONFIG_TESTS 1 00:07:45.444 #undef SPDK_CONFIG_TSAN 00:07:45.444 #define SPDK_CONFIG_UBLK 1 00:07:45.444 #define SPDK_CONFIG_UBSAN 1 00:07:45.444 #undef SPDK_CONFIG_UNIT_TESTS 00:07:45.444 #undef SPDK_CONFIG_URING 00:07:45.444 #define SPDK_CONFIG_URING_PATH 00:07:45.444 #undef SPDK_CONFIG_URING_ZNS 00:07:45.444 #undef SPDK_CONFIG_USDT 00:07:45.444 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:45.444 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:45.444 #define SPDK_CONFIG_VFIO_USER 1 00:07:45.444 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:45.444 #define SPDK_CONFIG_VHOST 1 00:07:45.444 #define SPDK_CONFIG_VIRTIO 1 00:07:45.444 #undef SPDK_CONFIG_VTUNE 00:07:45.444 #define SPDK_CONFIG_VTUNE_DIR 00:07:45.444 #define SPDK_CONFIG_WERROR 1 00:07:45.444 #define SPDK_CONFIG_WPDK_DIR 00:07:45.444 #undef SPDK_CONFIG_XNVME 00:07:45.444 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:45.444 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:45.445 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3862930 ]] 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3862930 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.7tU416 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:45.446 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7tU416/tests/target /tmp/spdk.7tU416 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52938289152 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994713088 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9056423936 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993981440 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:45.447 19:37:54 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996865024 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=491520 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:45.447 * Looking for test storage... 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52938289152 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11271016448 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:45.447 
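(Editor's note) The trace above is autotest_common.sh's set_test_storage step: it reads `df -T` into parallel bash arrays, resolves which mount backs the test directory, and only accepts it when the free space covers the roughly 2 GiB the test requested, exporting SPDK_TEST_STORAGE on success. A stripped-down sketch of the same space check follows; the candidate directories and the TEST_STORAGE variable name are illustrative, not the SPDK helper's own.

#!/usr/bin/env bash
# pick a directory whose backing filesystem has enough free space for the test
requested_size=$((2 * 1024 * 1024 * 1024))   # ~2 GiB, as requested in the trace
candidates=("$PWD" "/tmp")                   # illustrative list, not SPDK's candidates

for dir in "${candidates[@]}"; do
  # `df -P` prints one portable record per filesystem; field 4 is available space in 1K blocks
  avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
  if (( avail_kb * 1024 >= requested_size )); then
    export TEST_STORAGE="$dir"               # the real helper exports SPDK_TEST_STORAGE
    echo "* Found test storage at $dir"
    break
  fi
done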
19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.447 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.449 19:37:54 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
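(Editor's note) Here nvmf/common.sh accumulates the target's command line in the NVMF_APP bash array (`-i` for the shared-memory id, `-e 0xFFFF` for the tracepoint group mask), so a later step can prepend `ip netns exec ...` without breaking quoting. A minimal illustration of that array-prefixing pattern; the binary path and namespace name below are made up.

APP=(/usr/local/bin/nvmf_tgt)          # hypothetical binary location
APP+=(-i 0 -e 0xFFFF)                  # shm id and trace group mask, as in the trace
NS_PREFIX=(ip netns exec demo_ns)      # namespace wrapper, name is illustrative

CMD=("${NS_PREFIX[@]}" "${APP[@]}")    # same trick as NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
printf '%q ' "${CMD[@]}"; echo         # print the fully quoted command line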
00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.449 19:37:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
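(Editor's note) The e810/x722/mlx arrays being filled above come from matching PCI vendor:device IDs against a cached bus scan; 0x8086:0x159b is the Intel E810 function that ends up selected on this host, and the net devices are then looked up under the PCI function's sysfs node. Roughly the same lookup can be done directly with lspci and sysfs, as in this sketch (the device ID is taken from the log, everything else is standard Linux tooling, not the SPDK helper itself).

# list kernel net interfaces backed by Intel E810 functions (vendor 8086, device 159b)
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue                      # skip ports with no netdev bound
    echo "Found net device under $pci: ${netdir##*/}"
  done
done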
00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.356 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:47.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:47.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.357 19:37:56 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:47.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:47.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.357 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.617 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.617 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.617 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:47.617 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:47.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:07:47.618 00:07:47.618 --- 10.0.0.2 ping statistics --- 00:07:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.618 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:07:47.618 00:07:47.618 --- 10.0.0.1 ping statistics --- 00:07:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.618 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.618 ************************************ 00:07:47.618 START TEST nvmf_filesystem_no_in_capsule 00:07:47.618 ************************************ 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:47.618 19:37:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3864556 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3864556 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3864556 ']' 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.618 19:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.618 [2024-07-25 19:37:56.959318] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:47.618 [2024-07-25 19:37:56.959430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.618 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.618 [2024-07-25 19:37:57.027280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.877 [2024-07-25 19:37:57.119239] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.877 [2024-07-25 19:37:57.119288] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.877 [2024-07-25 19:37:57.119318] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.877 [2024-07-25 19:37:57.119330] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.877 [2024-07-25 19:37:57.119340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
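(Editor's note) At this point nvmfappstart has launched build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk network namespace set up earlier, and waitforlisten polls until the JSON-RPC socket at /var/tmp/spdk.sock is ready; the DPDK EAL and reactor notices are the target's own startup output. The rpc_cmd lines that follow then provision the export path. A condensed sketch of that start-and-provision sequence using scripts/rpc.py directly; the wait loop is a simplified stand-in for waitforlisten, paths are assumed relative to an SPDK checkout, and the RPC names and flags are copied from the rpc_cmd calls in this trace.

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do                   # socket appears once init completes
  kill -0 "$tgt_pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
  sleep 0.2
done

rpc=./scripts/rpc.py                                  # same RPCs the rpc_cmd lines issue
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # flags as in the trace; -c 0 matches "no_in_capsule"
$rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420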
00:07:47.877 [2024-07-25 19:37:57.119404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.877 [2024-07-25 19:37:57.119432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.877 [2024-07-25 19:37:57.119494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.877 [2024-07-25 19:37:57.119500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.877 [2024-07-25 19:37:57.257619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.877 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.135 Malloc1 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.135 [2024-07-25 19:37:57.438521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:48.135 { 00:07:48.135 "name": "Malloc1", 00:07:48.135 "aliases": [ 00:07:48.135 "040e12a7-646d-43b7-bb28-4f8c21bc8c8a" 00:07:48.135 ], 00:07:48.135 "product_name": "Malloc disk", 00:07:48.135 "block_size": 512, 00:07:48.135 "num_blocks": 1048576, 00:07:48.135 "uuid": "040e12a7-646d-43b7-bb28-4f8c21bc8c8a", 00:07:48.135 "assigned_rate_limits": { 00:07:48.135 "rw_ios_per_sec": 0, 00:07:48.135 "rw_mbytes_per_sec": 0, 00:07:48.135 "r_mbytes_per_sec": 0, 00:07:48.135 "w_mbytes_per_sec": 0 00:07:48.135 }, 00:07:48.135 "claimed": true, 00:07:48.135 "claim_type": "exclusive_write", 00:07:48.135 "zoned": false, 00:07:48.135 "supported_io_types": { 00:07:48.135 "read": true, 00:07:48.135 "write": true, 00:07:48.135 "unmap": true, 00:07:48.135 "write_zeroes": true, 00:07:48.135 "flush": true, 00:07:48.135 "reset": true, 00:07:48.135 "compare": false, 00:07:48.135 "compare_and_write": false, 00:07:48.135 "abort": true, 00:07:48.135 "nvme_admin": false, 00:07:48.135 "nvme_io": false 00:07:48.135 }, 00:07:48.135 "memory_domains": [ 00:07:48.135 { 00:07:48.135 "dma_device_id": "system", 00:07:48.135 "dma_device_type": 1 00:07:48.135 }, 00:07:48.135 { 00:07:48.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.135 "dma_device_type": 2 00:07:48.135 } 00:07:48.135 ], 00:07:48.135 "driver_specific": {} 00:07:48.135 } 00:07:48.135 ]' 00:07:48.135 
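(Editor's note) The JSON dump above is the bdev_get_bdevs output that get_bdev_size feeds through jq: block_size times num_blocks gives the 536870912-byte size the test later compares against the connected NVMe namespace. The same computation as a standalone sketch; the rpc.py path is assumed relative to an SPDK checkout.

info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
bs=$(jq '.[0].block_size' <<<"$info")      # 512 in the dump above
nb=$(jq '.[0].num_blocks' <<<"$info")      # 1048576 in the dump above
echo "Malloc1 size: $(( bs * nb )) bytes"  # 512 * 1048576 = 536870912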
19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:48.135 19:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.702 19:37:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:48.702 19:37:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:48.702 19:37:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:48.702 19:37:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:48.702 19:37:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:51.270 19:38:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:51.270 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:51.529 19:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:52.466 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:52.466 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:52.466 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:52.466 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.466 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.466 ************************************ 00:07:52.466 START TEST filesystem_ext4 00:07:52.466 ************************************ 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:52.467 19:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:52.467 mke2fs 1.46.5 (30-Dec-2021) 00:07:52.726 Discarding device blocks: 0/522240 done 00:07:52.726 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:52.726 
Filesystem UUID: 0ca5e234-9188-422a-a533-0578899ea920 00:07:52.726 Superblock backups stored on blocks: 00:07:52.726 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:52.726 00:07:52.726 Allocating group tables: 0/64 done 00:07:52.726 Writing inode tables: 0/64 done 00:07:54.627 Creating journal (8192 blocks): done 00:07:54.627 Writing superblocks and filesystem accounting information: 0/64 done 00:07:54.627 00:07:54.627 19:38:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:54.627 19:38:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3864556 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.888 00:07:54.888 real 0m2.342s 00:07:54.888 user 0m0.019s 00:07:54.888 sys 0m0.046s 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:54.888 ************************************ 00:07:54.888 END TEST filesystem_ext4 00:07:54.888 ************************************ 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.888 ************************************ 00:07:54.888 START TEST filesystem_btrfs 00:07:54.888 ************************************ 00:07:54.888 19:38:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:54.888 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:55.148 btrfs-progs v6.6.2 00:07:55.148 See https://btrfs.readthedocs.io for more information. 00:07:55.148 00:07:55.148 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:55.148 NOTE: several default settings have changed in version 5.15, please make sure 00:07:55.148 this does not affect your deployments: 00:07:55.148 - DUP for metadata (-m dup) 00:07:55.148 - enabled no-holes (-O no-holes) 00:07:55.148 - enabled free-space-tree (-R free-space-tree) 00:07:55.148 00:07:55.148 Label: (null) 00:07:55.148 UUID: 89ed633b-2ae8-4277-beda-c6b54f3ad865 00:07:55.148 Node size: 16384 00:07:55.148 Sector size: 4096 00:07:55.148 Filesystem size: 510.00MiB 00:07:55.148 Block group profiles: 00:07:55.148 Data: single 8.00MiB 00:07:55.148 Metadata: DUP 32.00MiB 00:07:55.148 System: DUP 8.00MiB 00:07:55.148 SSD detected: yes 00:07:55.148 Zoned device: no 00:07:55.148 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:55.148 Runtime features: free-space-tree 00:07:55.148 Checksum: crc32c 00:07:55.148 Number of devices: 1 00:07:55.148 Devices: 00:07:55.148 ID SIZE PATH 00:07:55.148 1 510.00MiB /dev/nvme0n1p1 00:07:55.148 00:07:55.148 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:55.148 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3864556 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.406 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.406 00:07:55.406 real 0m0.608s 00:07:55.407 user 0m0.014s 00:07:55.407 sys 0m0.115s 00:07:55.407 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.407 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:55.407 ************************************ 00:07:55.407 END TEST filesystem_btrfs 00:07:55.407 ************************************ 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:55.666 19:38:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.666 ************************************ 00:07:55.666 START TEST filesystem_xfs 00:07:55.666 ************************************ 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:55.666 19:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:55.666 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:55.666 = sectsz=512 attr=2, projid32bit=1 00:07:55.666 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:55.666 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:55.666 data = bsize=4096 blocks=130560, imaxpct=25 00:07:55.666 = sunit=0 swidth=0 blks 00:07:55.666 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:55.666 log =internal log bsize=4096 blocks=16384, version=2 00:07:55.666 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:55.666 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:56.603 Discarding blocks...Done. 
00:07:56.603 19:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:56.603 19:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3864556 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.138 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.138 00:07:59.138 real 0m3.685s 00:07:59.138 user 0m0.019s 00:07:59.138 sys 0m0.060s 00:07:59.139 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.139 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.139 ************************************ 00:07:59.139 END TEST filesystem_xfs 00:07:59.139 ************************************ 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.396 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:59.397 
19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3864556 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3864556 ']' 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3864556 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3864556 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3864556' 00:07:59.397 killing process with pid 3864556 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3864556 00:07:59.397 19:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3864556 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:59.966 00:07:59.966 real 0m12.238s 00:07:59.966 user 0m46.972s 00:07:59.966 sys 0m1.796s 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 ************************************ 00:07:59.966 END TEST nvmf_filesystem_no_in_capsule 00:07:59.966 ************************************ 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 
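For reference, the nvmf_filesystem_no_in_capsule pass above drives target/filesystem.sh through the same cycle for each filesystem type. A minimal shell sketch of that cycle, using the device and mount-point names from this run and dropping the retry/xtrace plumbing of the real scripts, is:

  # one-time partitioning of the exported namespace (filesystem.sh@66-69 in the trace)
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe; sleep 1

  # per-filesystem check, run as its own run_test case (make_filesystem + filesystem.sh@23-43)
  for fstype in ext4 btrfs xfs; do
      [ "$fstype" = ext4 ] && force=-F || force=-f   # ext4 takes -F, btrfs/xfs take -f
      mkfs.$fstype $force /dev/nvme0n1p1
      mount /dev/nvme0n1p1 /mnt/device
      touch /mnt/device/aaa; sync                    # write and flush through the NVMe/TCP path
      rm /mnt/device/aaa; sync
      umount /mnt/device
      kill -0 "$nvmfpid"                             # target app (pid 3864556 in this run) must still be alive
      lsblk -l -o NAME | grep -q -w nvme0n1          # namespace still visible on the host
      lsblk -l -o NAME | grep -q -w nvme0n1p1        # partition still visible
  done

The in-capsule pass that follows repeats this sequence, only with the TCP transport created with a 4096-byte in-capsule data size.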
************************************ 00:07:59.966 START TEST nvmf_filesystem_in_capsule 00:07:59.966 ************************************ 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3866243 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3866243 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3866243 ']' 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.966 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 [2024-07-25 19:38:09.247023] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:59.966 [2024-07-25 19:38:09.247153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.966 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.966 [2024-07-25 19:38:09.321221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.226 [2024-07-25 19:38:09.415178] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.226 [2024-07-25 19:38:09.415246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.226 [2024-07-25 19:38:09.415263] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.226 [2024-07-25 19:38:09.415276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.226 [2024-07-25 19:38:09.415288] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:00.226 [2024-07-25 19:38:09.415350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.226 [2024-07-25 19:38:09.415405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.226 [2024-07-25 19:38:09.415443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.226 [2024-07-25 19:38:09.415446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.226 [2024-07-25 19:38:09.574893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.226 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:00.227 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.227 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.485 Malloc1 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.485 19:38:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.485 [2024-07-25 19:38:09.759319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.485 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:00.485 { 00:08:00.485 "name": "Malloc1", 00:08:00.485 "aliases": [ 00:08:00.485 "cc1644f1-9252-4e6d-a1f3-8c6f5ceae869" 00:08:00.485 ], 00:08:00.485 "product_name": "Malloc disk", 00:08:00.485 "block_size": 512, 00:08:00.485 "num_blocks": 1048576, 00:08:00.485 "uuid": "cc1644f1-9252-4e6d-a1f3-8c6f5ceae869", 00:08:00.485 "assigned_rate_limits": { 00:08:00.485 "rw_ios_per_sec": 0, 00:08:00.485 "rw_mbytes_per_sec": 0, 00:08:00.485 "r_mbytes_per_sec": 0, 00:08:00.485 "w_mbytes_per_sec": 0 00:08:00.486 }, 00:08:00.486 "claimed": true, 00:08:00.486 "claim_type": "exclusive_write", 00:08:00.486 "zoned": false, 00:08:00.486 "supported_io_types": { 00:08:00.486 "read": true, 00:08:00.486 "write": true, 00:08:00.486 "unmap": true, 00:08:00.486 "write_zeroes": true, 00:08:00.486 "flush": true, 00:08:00.486 "reset": true, 00:08:00.486 "compare": false, 00:08:00.486 "compare_and_write": false, 00:08:00.486 "abort": true, 00:08:00.486 "nvme_admin": false, 00:08:00.486 "nvme_io": false 00:08:00.486 }, 00:08:00.486 "memory_domains": [ 00:08:00.486 { 00:08:00.486 "dma_device_id": "system", 00:08:00.486 "dma_device_type": 1 00:08:00.486 }, 00:08:00.486 { 00:08:00.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.486 "dma_device_type": 2 00:08:00.486 } 00:08:00.486 ], 00:08:00.486 "driver_specific": {} 00:08:00.486 } 00:08:00.486 ]' 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:00.486 19:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:01.054 19:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:01.054 19:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:01.054 19:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:01.054 19:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:01.054 19:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:03.592 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:03.593 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:03.593 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:03.593 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:03.593 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:03.593 19:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:03.851 19:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:04.784 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:04.784 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:04.784 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:04.784 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.784 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.042 ************************************ 00:08:05.042 START TEST filesystem_in_capsule_ext4 00:08:05.042 ************************************ 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:05.042 19:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:05.042 mke2fs 1.46.5 (30-Dec-2021) 00:08:05.042 Discarding device blocks: 0/522240 done 00:08:05.042 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:05.042 Filesystem UUID: d83cda9d-3a25-46c5-9503-459028d9e39e 00:08:05.042 Superblock backups stored on blocks: 00:08:05.042 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:05.042 00:08:05.042 Allocating group tables: 0/64 done 00:08:05.042 Writing inode tables: 0/64 done 00:08:05.042 Creating journal (8192 blocks): done 00:08:06.128 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:06.128 00:08:06.128 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:06.128 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.128 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3866243 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.386 00:08:06.386 real 0m1.413s 00:08:06.386 user 0m0.022s 00:08:06.386 sys 0m0.049s 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:06.386 ************************************ 00:08:06.386 END TEST filesystem_in_capsule_ext4 00:08:06.386 ************************************ 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.386 ************************************ 00:08:06.386 START TEST filesystem_in_capsule_btrfs 00:08:06.386 ************************************ 00:08:06.386 19:38:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:06.386 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:06.645 btrfs-progs v6.6.2 00:08:06.645 See https://btrfs.readthedocs.io for more information. 00:08:06.645 00:08:06.645 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:06.645 NOTE: several default settings have changed in version 5.15, please make sure 00:08:06.645 this does not affect your deployments: 00:08:06.645 - DUP for metadata (-m dup) 00:08:06.645 - enabled no-holes (-O no-holes) 00:08:06.645 - enabled free-space-tree (-R free-space-tree) 00:08:06.645 00:08:06.645 Label: (null) 00:08:06.645 UUID: 960a7e88-617b-415c-89ce-7c83ece7a2d3 00:08:06.645 Node size: 16384 00:08:06.645 Sector size: 4096 00:08:06.645 Filesystem size: 510.00MiB 00:08:06.645 Block group profiles: 00:08:06.645 Data: single 8.00MiB 00:08:06.645 Metadata: DUP 32.00MiB 00:08:06.645 System: DUP 8.00MiB 00:08:06.645 SSD detected: yes 00:08:06.645 Zoned device: no 00:08:06.645 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:06.645 Runtime features: free-space-tree 00:08:06.645 Checksum: crc32c 00:08:06.645 Number of devices: 1 00:08:06.645 Devices: 00:08:06.645 ID SIZE PATH 00:08:06.645 1 510.00MiB /dev/nvme0n1p1 00:08:06.645 00:08:06.645 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:06.645 19:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3866243 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.578 00:08:07.578 real 0m1.111s 00:08:07.578 user 0m0.012s 00:08:07.578 sys 0m0.115s 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:07.578 ************************************ 00:08:07.578 END TEST filesystem_in_capsule_btrfs 00:08:07.578 ************************************ 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:07.578 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.579 ************************************ 00:08:07.579 START TEST filesystem_in_capsule_xfs 00:08:07.579 ************************************ 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:07.579 19:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:07.579 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:07.579 = sectsz=512 attr=2, projid32bit=1 00:08:07.579 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:07.579 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:07.579 data = bsize=4096 blocks=130560, imaxpct=25 00:08:07.579 = sunit=0 swidth=0 blks 00:08:07.579 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:07.579 log =internal log bsize=4096 blocks=16384, version=2 00:08:07.579 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:07.579 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:08.517 Discarding blocks...Done. 
00:08:08.517 19:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:08.518 19:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.051 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.051 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:11.051 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.051 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3866243 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.052 00:08:11.052 real 0m3.421s 00:08:11.052 user 0m0.015s 00:08:11.052 sys 0m0.061s 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 ************************************ 00:08:11.052 END TEST filesystem_in_capsule_xfs 00:08:11.052 ************************************ 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:11.052 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:11.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.311 19:38:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3866243 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3866243 ']' 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3866243 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3866243 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3866243' 00:08:11.311 killing process with pid 3866243 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3866243 00:08:11.311 19:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3866243 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:11.881 00:08:11.881 real 0m11.807s 00:08:11.881 user 0m45.284s 00:08:11.881 sys 0m1.820s 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.881 ************************************ 00:08:11.881 END TEST nvmf_filesystem_in_capsule 00:08:11.881 ************************************ 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.881 rmmod nvme_tcp 00:08:11.881 rmmod nvme_fabrics 00:08:11.881 rmmod nvme_keyring 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:11.881 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.882 19:38:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.812 19:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.812 00:08:13.812 real 0m28.450s 00:08:13.812 user 1m33.127s 00:08:13.812 sys 0m5.150s 00:08:13.812 19:38:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.812 19:38:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.812 ************************************ 00:08:13.812 END TEST nvmf_filesystem 00:08:13.812 ************************************ 00:08:13.812 19:38:23 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:13.812 19:38:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.812 19:38:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.812 19:38:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.812 ************************************ 00:08:13.812 START TEST nvmf_target_discovery 00:08:13.812 ************************************ 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:13.812 * Looking for test storage... 
00:08:13.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.812 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.071 19:38:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.973 19:38:25 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.973 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:15.974 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:08:16.234 00:08:16.234 --- 10.0.0.2 ping statistics --- 00:08:16.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.234 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:08:16.234 00:08:16.234 --- 10.0.0.1 ping statistics --- 00:08:16.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.234 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3869730 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3869730 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3869730 ']' 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:16.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:16.234 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.234 [2024-07-25 19:38:25.529669] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:08:16.234 [2024-07-25 19:38:25.529749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.234 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.235 [2024-07-25 19:38:25.593276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.494 [2024-07-25 19:38:25.685454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.494 [2024-07-25 19:38:25.685514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.494 [2024-07-25 19:38:25.685528] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.494 [2024-07-25 19:38:25.685539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.494 [2024-07-25 19:38:25.685549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.494 [2024-07-25 19:38:25.685683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.494 [2024-07-25 19:38:25.685748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.494 [2024-07-25 19:38:25.685814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.494 [2024-07-25 19:38:25.685817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.494 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:16.494 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 [2024-07-25 19:38:25.844723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:16.495 19:38:25 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 Null1 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 [2024-07-25 19:38:25.885035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 Null2 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:16.495 19:38:25 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.495 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 Null3 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 Null4 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.755 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:16.755 00:08:16.755 Discovery Log Number of Records 6, Generation counter 6 00:08:16.755 =====Discovery Log Entry 0====== 00:08:16.755 trtype: tcp 00:08:16.755 adrfam: ipv4 00:08:16.755 subtype: current discovery subsystem 00:08:16.755 treq: not required 00:08:16.755 portid: 0 00:08:16.755 trsvcid: 4420 00:08:16.755 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:16.755 traddr: 10.0.0.2 00:08:16.755 eflags: explicit discovery connections, duplicate discovery information 00:08:16.755 sectype: none 00:08:16.755 =====Discovery Log Entry 1====== 00:08:16.755 trtype: tcp 00:08:16.755 adrfam: ipv4 00:08:16.755 subtype: nvme subsystem 00:08:16.755 treq: not required 00:08:16.755 portid: 0 00:08:16.755 trsvcid: 4420 00:08:16.755 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:16.755 traddr: 10.0.0.2 00:08:16.755 eflags: none 00:08:16.755 sectype: none 00:08:16.755 =====Discovery Log Entry 2====== 00:08:16.755 trtype: tcp 00:08:16.755 adrfam: ipv4 00:08:16.755 subtype: nvme subsystem 00:08:16.755 treq: not required 00:08:16.755 portid: 0 00:08:16.755 trsvcid: 4420 00:08:16.755 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:16.755 traddr: 10.0.0.2 00:08:16.755 eflags: none 00:08:16.755 sectype: none 00:08:16.755 =====Discovery Log Entry 3====== 00:08:16.755 trtype: tcp 00:08:16.755 adrfam: ipv4 00:08:16.755 subtype: nvme subsystem 00:08:16.755 treq: not required 00:08:16.755 portid: 0 00:08:16.755 trsvcid: 4420 00:08:16.755 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:16.755 traddr: 10.0.0.2 00:08:16.755 eflags: none 00:08:16.755 sectype: none 00:08:16.755 =====Discovery Log Entry 4====== 00:08:16.755 trtype: tcp 00:08:16.755 adrfam: ipv4 00:08:16.755 subtype: nvme subsystem 00:08:16.755 treq: not required 
00:08:16.755 portid: 0 00:08:16.755 trsvcid: 4420 00:08:16.755 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:16.755 traddr: 10.0.0.2 00:08:16.755 eflags: none 00:08:16.755 sectype: none 00:08:16.755 =====Discovery Log Entry 5====== 00:08:16.755 trtype: tcp 00:08:16.755 adrfam: ipv4 00:08:16.755 subtype: discovery subsystem referral 00:08:16.755 treq: not required 00:08:16.755 portid: 0 00:08:16.755 trsvcid: 4430 00:08:16.755 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:16.755 traddr: 10.0.0.2 00:08:16.755 eflags: none 00:08:16.755 sectype: none 00:08:16.755 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:16.755 Perform nvmf subsystem discovery via RPC 00:08:16.755 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:16.755 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.755 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 [ 00:08:16.755 { 00:08:16.755 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:16.755 "subtype": "Discovery", 00:08:16.755 "listen_addresses": [ 00:08:16.755 { 00:08:16.755 "trtype": "TCP", 00:08:16.755 "adrfam": "IPv4", 00:08:16.755 "traddr": "10.0.0.2", 00:08:16.755 "trsvcid": "4420" 00:08:16.755 } 00:08:16.755 ], 00:08:16.755 "allow_any_host": true, 00:08:16.755 "hosts": [] 00:08:16.755 }, 00:08:16.755 { 00:08:16.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.755 "subtype": "NVMe", 00:08:16.755 "listen_addresses": [ 00:08:16.755 { 00:08:16.755 "trtype": "TCP", 00:08:16.755 "adrfam": "IPv4", 00:08:16.755 "traddr": "10.0.0.2", 00:08:16.755 "trsvcid": "4420" 00:08:16.755 } 00:08:16.755 ], 00:08:16.755 "allow_any_host": true, 00:08:16.755 "hosts": [], 00:08:16.755 "serial_number": "SPDK00000000000001", 00:08:16.755 "model_number": "SPDK bdev Controller", 00:08:16.755 "max_namespaces": 32, 00:08:16.755 "min_cntlid": 1, 00:08:16.755 "max_cntlid": 65519, 00:08:16.755 "namespaces": [ 00:08:16.755 { 00:08:16.755 "nsid": 1, 00:08:16.755 "bdev_name": "Null1", 00:08:16.755 "name": "Null1", 00:08:16.755 "nguid": "5291CC57D74F4AE3BCE57D306529BDD0", 00:08:16.755 "uuid": "5291cc57-d74f-4ae3-bce5-7d306529bdd0" 00:08:16.755 } 00:08:16.755 ] 00:08:16.755 }, 00:08:16.755 { 00:08:16.755 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:16.755 "subtype": "NVMe", 00:08:16.755 "listen_addresses": [ 00:08:16.755 { 00:08:16.755 "trtype": "TCP", 00:08:16.755 "adrfam": "IPv4", 00:08:16.755 "traddr": "10.0.0.2", 00:08:16.755 "trsvcid": "4420" 00:08:16.755 } 00:08:16.755 ], 00:08:16.755 "allow_any_host": true, 00:08:16.755 "hosts": [], 00:08:16.755 "serial_number": "SPDK00000000000002", 00:08:16.756 "model_number": "SPDK bdev Controller", 00:08:16.756 "max_namespaces": 32, 00:08:16.756 "min_cntlid": 1, 00:08:16.756 "max_cntlid": 65519, 00:08:16.756 "namespaces": [ 00:08:16.756 { 00:08:16.756 "nsid": 1, 00:08:16.756 "bdev_name": "Null2", 00:08:16.756 "name": "Null2", 00:08:16.756 "nguid": "1BFE9F23A737484595ED4592B3CDFCE3", 00:08:16.756 "uuid": "1bfe9f23-a737-4845-95ed-4592b3cdfce3" 00:08:16.756 } 00:08:16.756 ] 00:08:16.756 }, 00:08:16.756 { 00:08:16.756 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:16.756 "subtype": "NVMe", 00:08:16.756 "listen_addresses": [ 00:08:16.756 { 00:08:16.756 "trtype": "TCP", 00:08:16.756 "adrfam": "IPv4", 00:08:16.756 "traddr": "10.0.0.2", 00:08:16.756 "trsvcid": "4420" 00:08:16.756 } 00:08:16.756 ], 00:08:16.756 "allow_any_host": true, 
00:08:16.756 "hosts": [], 00:08:16.756 "serial_number": "SPDK00000000000003", 00:08:16.756 "model_number": "SPDK bdev Controller", 00:08:16.756 "max_namespaces": 32, 00:08:16.756 "min_cntlid": 1, 00:08:16.756 "max_cntlid": 65519, 00:08:16.756 "namespaces": [ 00:08:16.756 { 00:08:16.756 "nsid": 1, 00:08:16.756 "bdev_name": "Null3", 00:08:16.756 "name": "Null3", 00:08:16.756 "nguid": "99E101BE1C9348DCB05E6F6FAAA4D72F", 00:08:16.756 "uuid": "99e101be-1c93-48dc-b05e-6f6faaa4d72f" 00:08:16.756 } 00:08:16.756 ] 00:08:16.756 }, 00:08:16.756 { 00:08:16.756 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:16.756 "subtype": "NVMe", 00:08:16.756 "listen_addresses": [ 00:08:16.756 { 00:08:16.756 "trtype": "TCP", 00:08:16.756 "adrfam": "IPv4", 00:08:16.756 "traddr": "10.0.0.2", 00:08:16.756 "trsvcid": "4420" 00:08:16.756 } 00:08:16.756 ], 00:08:16.756 "allow_any_host": true, 00:08:16.756 "hosts": [], 00:08:16.756 "serial_number": "SPDK00000000000004", 00:08:16.756 "model_number": "SPDK bdev Controller", 00:08:16.756 "max_namespaces": 32, 00:08:16.756 "min_cntlid": 1, 00:08:16.756 "max_cntlid": 65519, 00:08:16.756 "namespaces": [ 00:08:16.756 { 00:08:16.756 "nsid": 1, 00:08:16.756 "bdev_name": "Null4", 00:08:16.756 "name": "Null4", 00:08:16.756 "nguid": "C7811BC66B5B4665AB08531AD39C9801", 00:08:16.756 "uuid": "c7811bc6-6b5b-4665-ab08-531ad39c9801" 00:08:16.756 } 00:08:16.756 ] 00:08:16.756 } 00:08:16.756 ] 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.756 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.015 rmmod nvme_tcp 00:08:17.015 rmmod nvme_fabrics 00:08:17.015 rmmod nvme_keyring 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3869730 ']' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3869730 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3869730 ']' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3869730 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3869730 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3869730' 00:08:17.015 killing process with pid 3869730 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3869730 00:08:17.015 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3869730 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.274 19:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.178 19:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.437 00:08:19.437 real 0m5.428s 00:08:19.437 user 0m4.305s 00:08:19.437 sys 0m1.849s 00:08:19.437 19:38:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.437 19:38:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.437 ************************************ 00:08:19.437 END TEST nvmf_target_discovery 00:08:19.437 ************************************ 00:08:19.437 19:38:28 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:19.437 19:38:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:19.437 19:38:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.437 19:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.437 ************************************ 00:08:19.437 START TEST nvmf_referrals 00:08:19.437 ************************************ 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:19.437 * Looking for test storage... 00:08:19.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
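referrals.sh rebuilds the same namespace-based TCP topology that the discovery test just tore down, and, given the NVMF_REFERRAL_IP_1..3 and NVMF_PORT_REFERRAL values it defines above, it presumably exercises the same referral RPC pair already seen in the discovery trace, this time against the three loopback referral addresses. A minimal sketch of that pattern, again assuming direct scripts/rpc.py calls in place of rpc_cmd (the actual test additionally passes --hostnqn/--hostid to nvme discover, as in the discovery run above):

  # advertise one discovery referral per loopback address defined by referrals.sh, then read the log back
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # each referral appears as a 'discovery subsystem referral' record
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done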
00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.437 19:38:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.344 19:38:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.344 19:38:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.344 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.345 19:38:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:08:21.345 00:08:21.345 --- 10.0.0.2 ping statistics --- 00:08:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.345 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:21.345 00:08:21.345 --- 10.0.0.1 ping statistics --- 00:08:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.345 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3871811 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3871811 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3871811 ']' 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:21.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:21.345 19:38:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.605 [2024-07-25 19:38:30.781731] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:08:21.605 [2024-07-25 19:38:30.781808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.605 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.605 [2024-07-25 19:38:30.852391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.605 [2024-07-25 19:38:30.943137] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.605 [2024-07-25 19:38:30.943199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.605 [2024-07-25 19:38:30.943224] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.605 [2024-07-25 19:38:30.943238] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.605 [2024-07-25 19:38:30.943250] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.605 [2024-07-25 19:38:30.943341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.605 [2024-07-25 19:38:30.943410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.605 [2024-07-25 19:38:30.943507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.605 [2024-07-25 19:38:30.943510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 [2024-07-25 19:38:31.095905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 [2024-07-25 19:38:31.108168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
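With the transport created and the discovery service listening on 10.0.0.2:8009, the referral checks that follow (target/referrals.sh@44 through @83) reduce to a handful of RPCs plus an initiator-side `nvme discover` to confirm both views agree. The sketch below condenses that flow; it is not the test script itself. The individual RPC and nvme-cli invocations are the ones visible in the trace, while the `scripts/rpc.py` path (default socket /var/tmp/spdk.sock), the shell variables, and the loops are illustrative assumptions for this workspace.

```bash
#!/usr/bin/env bash
# Sketch of the referral checks traced below; not the test script itself.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
RPC="$SPDK/scripts/rpc.py"                               # assumed; talks to /var/tmp/spdk.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Transport and discovery listener, as brought up above (referrals.sh@40/@41).
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

# Register three referrals on the referral port (4430) and confirm the count via RPC.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length     # expect 3

# The initiator-side discovery log page must report the same referral addresses.
nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# Tear the referrals back down; the referral list should end up empty again.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
```

The later steps in the trace repeat the same add/get/remove pattern with `-n discovery` and `-n nqn.2016-06.io.spdk:cnode1` to check that the referral's subsystem NQN shows up with the expected subtype in the discovery records.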
00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.866 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:22.125 19:38:31 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.125 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.383 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.641 19:38:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.641 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.899 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.158 rmmod nvme_tcp 00:08:23.158 rmmod nvme_fabrics 00:08:23.158 rmmod nvme_keyring 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3871811 ']' 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3871811 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3871811 ']' 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3871811 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3871811 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3871811' 00:08:23.158 killing process with pid 3871811 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3871811 00:08:23.158 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3871811 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.416 19:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.949 19:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.949 00:08:25.949 real 0m6.176s 00:08:25.949 user 0m8.558s 00:08:25.949 sys 0m2.055s 00:08:25.949 19:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:25.949 19:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.949 ************************************ 00:08:25.949 END TEST nvmf_referrals 00:08:25.949 ************************************ 00:08:25.949 19:38:34 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:25.949 19:38:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:25.949 19:38:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.949 19:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.949 ************************************ 00:08:25.949 START TEST nvmf_connect_disconnect 00:08:25.949 ************************************ 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:25.949 * Looking for test storage... 00:08:25.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.949 19:38:34 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.949 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.950 19:38:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.731 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:27.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:27.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:27.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:27.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.732 19:38:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:08:27.732 00:08:27.732 --- 10.0.0.2 ping statistics --- 00:08:27.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.732 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:27.732 00:08:27.732 --- 10.0.0.1 ping statistics --- 00:08:27.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.732 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3873989 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3873989 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3873989 ']' 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:27.732 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.992 [2024-07-25 19:38:37.167200] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:08:27.992 [2024-07-25 19:38:37.167287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.992 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.992 [2024-07-25 19:38:37.245255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.992 [2024-07-25 19:38:37.340618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.992 [2024-07-25 19:38:37.340674] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.992 [2024-07-25 19:38:37.340701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.992 [2024-07-25 19:38:37.340716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.992 [2024-07-25 19:38:37.340728] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.992 [2024-07-25 19:38:37.340789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.992 [2024-07-25 19:38:37.340844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.992 [2024-07-25 19:38:37.340899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.992 [2024-07-25 19:38:37.340903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.254 [2024-07-25 19:38:37.505096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.254 19:38:37 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.254 [2024-07-25 19:38:37.558485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:28.254 19:38:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:30.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.447 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:17.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.277 rmmod nvme_tcp 00:12:19.277 rmmod nvme_fabrics 00:12:19.277 rmmod nvme_keyring 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3873989 ']' 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3873989 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
3873989 ']' 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3873989 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3873989 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3873989' 00:12:19.277 killing process with pid 3873989 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3873989 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3873989 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.277 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.278 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.278 19:42:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.185 19:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.185 00:12:21.185 real 3m55.578s 00:12:21.185 user 14m57.759s 00:12:21.185 sys 0m33.986s 00:12:21.185 19:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.185 19:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.185 ************************************ 00:12:21.185 END TEST nvmf_connect_disconnect 00:12:21.185 ************************************ 00:12:21.185 19:42:30 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:21.185 19:42:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:21.185 19:42:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.185 19:42:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.185 ************************************ 00:12:21.185 START TEST nvmf_multitarget 00:12:21.185 ************************************ 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:21.185 * Looking for test storage... 
00:12:21.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.185 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:21.186 19:42:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.186 19:42:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.186 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.186 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.186 19:42:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.186 19:42:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:23.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:23.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:23.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:23.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.090 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:12:23.351 00:12:23.351 --- 10.0.0.2 ping statistics --- 00:12:23.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.351 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:12:23.351 00:12:23.351 --- 10.0.0.1 ping statistics --- 00:12:23.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.351 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3905691 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3905691 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3905691 ']' 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:23.351 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.351 [2024-07-25 19:42:32.696915] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
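With the ping checks above passing and the target app up, multitarget.sh exercises test/nvmf/target/multitarget_rpc.py (path as printed in this log). A rough sketch of the create/verify/delete cycle that produces the jq length checks and the "nvmf_tgt_1"/"nvmf_tgt_2" lines that follow, using the same arguments shown below:

MT_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$MT_RPC nvmf_get_targets | jq length            # 1: only the default target exists
$MT_RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$MT_RPC nvmf_create_target -n nvmf_tgt_2 -s 32
$MT_RPC nvmf_get_targets | jq length            # 3 after the two creates
$MT_RPC nvmf_delete_target -n nvmf_tgt_1
$MT_RPC nvmf_delete_target -n nvmf_tgt_2
$MT_RPC nvmf_get_targets | jq length            # back to 1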
00:12:23.351 [2024-07-25 19:42:32.696984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.351 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.351 [2024-07-25 19:42:32.766461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.610 [2024-07-25 19:42:32.864191] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.610 [2024-07-25 19:42:32.864260] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.610 [2024-07-25 19:42:32.864276] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.610 [2024-07-25 19:42:32.864290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.610 [2024-07-25 19:42:32.864302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.610 [2024-07-25 19:42:32.864363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.610 [2024-07-25 19:42:32.864423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.610 [2024-07-25 19:42:32.864453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.610 [2024-07-25 19:42:32.864463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.610 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:23.610 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:23.610 19:42:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.610 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.610 19:42:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.610 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.610 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:23.610 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.610 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:23.867 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:23.867 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:23.867 "nvmf_tgt_1" 00:12:23.867 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:24.130 "nvmf_tgt_2" 00:12:24.130 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:24.130 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:24.130 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:24.130 
19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:24.130 true 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:24.389 true 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.389 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.389 rmmod nvme_tcp 00:12:24.389 rmmod nvme_fabrics 00:12:24.389 rmmod nvme_keyring 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3905691 ']' 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3905691 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3905691 ']' 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3905691 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3905691 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3905691' 00:12:24.646 killing process with pid 3905691 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3905691 00:12:24.646 19:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3905691 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.904 19:42:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.808 19:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.808 00:12:26.808 real 0m5.622s 00:12:26.808 user 0m6.327s 00:12:26.808 sys 0m1.881s 00:12:26.808 19:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.808 19:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.808 ************************************ 00:12:26.808 END TEST nvmf_multitarget 00:12:26.808 ************************************ 00:12:26.808 19:42:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.808 19:42:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:26.808 19:42:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.808 19:42:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.808 ************************************ 00:12:26.808 START TEST nvmf_rpc 00:12:26.808 ************************************ 00:12:26.808 19:42:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.808 * Looking for test storage... 00:12:27.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.066 19:42:36 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.066 19:42:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.067 
19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:27.067 19:42:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.968 
19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.968 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:12:28.968 00:12:28.968 --- 10.0.0.2 ping statistics --- 00:12:28.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.968 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:28.969 00:12:28.969 --- 10.0.0.1 ping statistics --- 00:12:28.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.969 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.969 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3907790 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3907790 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3907790 ']' 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:29.227 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.227 [2024-07-25 19:42:38.451498] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
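The nvmf_tcp_init trace above builds a loopback topology out of the two ice ports: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); TCP port 4420 is opened in iptables and reachability is verified with one ping in each direction. A minimal standalone sketch of the same setup, using the interface names and addresses from this run:

# Sketch of the nvmf_tcp_init topology traced above (names/addresses taken from this run).
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # target-side port, moved into the namespace
INITIATOR_IF=cvl_0_1     # initiator-side port, stays in the root namespace

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Let NVMe/TCP traffic in on the initiator-side port and confirm both directions work.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1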
00:12:29.227 [2024-07-25 19:42:38.451571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.227 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.227 [2024-07-25 19:42:38.522359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.227 [2024-07-25 19:42:38.618880] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.227 [2024-07-25 19:42:38.618945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.227 [2024-07-25 19:42:38.618962] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.227 [2024-07-25 19:42:38.618976] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.227 [2024-07-25 19:42:38.618989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.227 [2024-07-25 19:42:38.619103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.227 [2024-07-25 19:42:38.619135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.227 [2024-07-25 19:42:38.619190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.227 [2024-07-25 19:42:38.619195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:29.485 "tick_rate": 2700000000, 00:12:29.485 "poll_groups": [ 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_000", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [] 00:12:29.485 }, 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_001", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [] 00:12:29.485 }, 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_002", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [] 
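nvmfappstart above launches nvmf_tgt inside that namespace on four cores (-m 0xF), and waitforlisten blocks until the app answers on its JSON-RPC socket (/var/tmp/spdk.sock in this run). A rough standalone stand-in for that step, assuming the SPDK tree path used by this job and using an rpc.py probe as the readiness check:

# Start the target inside the namespace and wait for its RPC socket (simplified stand-in
# for nvmfappstart/waitforlisten; SPDK_DIR is the tree checked out by this job).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TARGET_NS=cvl_0_0_ns_spdk

ip netns exec "$TARGET_NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!   # pid of the ip-netns-exec wrapper; good enough for a liveness check here

# The UNIX-domain RPC socket is visible from the root namespace, so poll it directly.
until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done

# Create the TCP transport once, as target/rpc.sh@31 does further down in the trace.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192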
00:12:29.485 }, 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_003", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [] 00:12:29.485 } 00:12:29.485 ] 00:12:29.485 }' 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.485 [2024-07-25 19:42:38.873280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.485 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:29.485 "tick_rate": 2700000000, 00:12:29.485 "poll_groups": [ 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_000", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [ 00:12:29.485 { 00:12:29.485 "trtype": "TCP" 00:12:29.485 } 00:12:29.485 ] 00:12:29.485 }, 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_001", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [ 00:12:29.485 { 00:12:29.485 "trtype": "TCP" 00:12:29.485 } 00:12:29.485 ] 00:12:29.485 }, 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_002", 00:12:29.485 "admin_qpairs": 0, 00:12:29.485 "io_qpairs": 0, 00:12:29.485 "current_admin_qpairs": 0, 00:12:29.485 "current_io_qpairs": 0, 00:12:29.485 "pending_bdev_io": 0, 00:12:29.485 "completed_nvme_io": 0, 00:12:29.485 "transports": [ 00:12:29.485 { 00:12:29.485 "trtype": "TCP" 00:12:29.485 } 00:12:29.485 ] 00:12:29.485 }, 00:12:29.485 { 00:12:29.485 "name": "nvmf_tgt_poll_group_003", 00:12:29.486 "admin_qpairs": 0, 00:12:29.486 "io_qpairs": 0, 00:12:29.486 "current_admin_qpairs": 0, 00:12:29.486 "current_io_qpairs": 0, 00:12:29.486 "pending_bdev_io": 0, 00:12:29.486 "completed_nvme_io": 0, 00:12:29.486 "transports": [ 00:12:29.486 { 00:12:29.486 "trtype": "TCP" 00:12:29.486 } 00:12:29.486 ] 00:12:29.486 } 00:12:29.486 ] 
00:12:29.486 }' 00:12:29.486 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:29.486 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:29.486 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:29.486 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.744 Malloc1 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.744 19:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.744 [2024-07-25 19:42:39.026444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.744 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:29.745 [2024-07-25 19:42:39.048969] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:29.745 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:29.745 could not add new controller: failed to write to nvme-fabrics device 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.745 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.310 19:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.310 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:30.310 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.310 19:42:39 
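The NOT/valid_exec_arg plumbing traced here is an expect-failure wrapper: the first nvme connect must be rejected with "does not allow host" because the host NQN has not yet been added to the subsystem, and only after nvmf_subsystem_add_host does the identical connect succeed. A condensed sketch of that access-control check, reusing the host UUID from this run and assuming the subsystem, listener and $SPDK_DIR from the earlier sketches are already in place:

# Access-control check condensed from target/rpc.sh@58-@63 (sketch).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2016-06.io.spdk:cnode1

# 1) Connect must fail while the host NQN is not whitelisted on the subsystem.
if nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420; then
    echo "ERROR: connect succeeded for a host the subsystem should reject" >&2
    exit 1
fi

# 2) Whitelist the host NQN, after which the same connect is expected to succeed.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420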
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:30.310 19:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:32.841 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.842 [2024-07-25 19:42:41.787366] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:32.842 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.842 could not add new controller: failed to write to nvme-fabrics device 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.842 19:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.101 19:42:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.101 19:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.101 19:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.101 19:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:33.101 19:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:35.005 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:35.005 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:35.005 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.265 [2024-07-25 19:42:44.532385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.265 19:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.837 19:42:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.837 19:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:35.837 19:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.837 19:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:35.837 19:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:37.775 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 [2024-07-25 19:42:47.310912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.034 
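waitforserial and waitforserial_disconnect, whose xtrace keeps reappearing around each connect/disconnect, simply poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) shows up or goes away. A simplified reconstruction of the two helpers as they behave in this trace (the 2-second sleep and 15-iteration bound come from the trace; error reporting is trimmed and the exact loop structure is assumed):

# Poll until a namespace with the given serial appears in lsblk (sketch of waitforserial).
waitforserial() {
    local serial=$1 expected=${2:-1} i=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches the subsystem serial.
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )); then
            return 0
        fi
    done
    return 1
}

# Poll until no device with the serial remains (sketch of waitforserial_disconnect).
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}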
19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.034 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.600 19:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.600 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.600 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.600 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.600 19:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:41.134 19:42:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.134 19:42:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 [2024-07-25 19:42:50.052206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.134 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.393 19:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.393 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:41.393 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.393 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.393 19:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:43.298 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.556 [2024-07-25 19:42:52.831631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.556 19:42:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.120 19:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.120 19:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:44.120 19:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
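Each pass of the target/rpc.sh@81 loop re-provisions the same subsystem from scratch before connecting: create it with the SPDKISFASTANDAWESOME serial, add a TCP listener on 10.0.0.2:4420, attach the 64 MiB Malloc1 bdev as namespace 5, and allow any host. Since rpc_cmd forwards these arguments to the target's RPC interface, one iteration's provisioning can be written against rpc.py directly, roughly as:

# One provisioning pass of the target/rpc.sh@81 loop, issued via rpc.py (sketch;
# $SPDK_DIR as in the earlier sketch).
rpc="$SPDK_DIR/scripts/rpc.py"
SUBNQN=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME           # serial reported to the host
"$rpc" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420  # listener inside the target netns
"$rpc" nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5                      # 64 MiB malloc bdev as nsid 5
"$rpc" nvmf_subsystem_allow_any_host "$SUBNQN"                           # no per-host whitelist this time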
00:12:44.120 19:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:44.120 19:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:46.647 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:46.647 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:46.647 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.647 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.648 [2024-07-25 19:42:55.646824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.648 19:42:55 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.648 19:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.907 19:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.907 19:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:46.907 19:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.907 19:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:46.907 19:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
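The nvmf_delete_subsystem just above ends the fifth and final pass of that loop: provision, connect the kernel initiator from the root namespace, wait for the serial to appear, disconnect, wait for it to disappear, then remove the namespace and the subsystem. The seq 1 5 loop that starts next (target/rpc.sh@99) runs the same create/add-ns/remove-ns/delete lifecycle five more times without connecting a host at all. Condensed, the loop that just finished looks like the sketch below (provision_subsystem is a hypothetical wrapper around the four rpc.py calls in the previous sketch; HOSTID reuses the same UUID as HOSTNQN in this run):

# Condensed form of the target/rpc.sh@81-@94 connect/disconnect loop traced above.
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
for i in $(seq 1 5); do
    provision_subsystem                                # hypothetical wrapper: the 4 rpc.py calls above
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME                 # block device must show up
    nvme disconnect -n "$SUBNQN"
    waitforserial_disconnect SPDKISFASTANDAWESOME      # and go away again
    "$rpc" nvmf_subsystem_remove_ns "$SUBNQN" 5
    "$rpc" nvmf_delete_subsystem "$SUBNQN"
done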
00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 [2024-07-25 19:42:58.409497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 [2024-07-25 19:42:58.457533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.447 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 [2024-07-25 19:42:58.505698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 [2024-07-25 19:42:58.553861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 [2024-07-25 19:42:58.602016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:49.448 "tick_rate": 2700000000, 00:12:49.448 "poll_groups": [ 00:12:49.448 { 00:12:49.448 "name": "nvmf_tgt_poll_group_000", 00:12:49.448 "admin_qpairs": 2, 00:12:49.448 
"io_qpairs": 84, 00:12:49.448 "current_admin_qpairs": 0, 00:12:49.448 "current_io_qpairs": 0, 00:12:49.448 "pending_bdev_io": 0, 00:12:49.448 "completed_nvme_io": 136, 00:12:49.448 "transports": [ 00:12:49.448 { 00:12:49.448 "trtype": "TCP" 00:12:49.448 } 00:12:49.448 ] 00:12:49.448 }, 00:12:49.448 { 00:12:49.448 "name": "nvmf_tgt_poll_group_001", 00:12:49.448 "admin_qpairs": 2, 00:12:49.448 "io_qpairs": 84, 00:12:49.448 "current_admin_qpairs": 0, 00:12:49.448 "current_io_qpairs": 0, 00:12:49.448 "pending_bdev_io": 0, 00:12:49.448 "completed_nvme_io": 214, 00:12:49.448 "transports": [ 00:12:49.448 { 00:12:49.448 "trtype": "TCP" 00:12:49.448 } 00:12:49.448 ] 00:12:49.448 }, 00:12:49.448 { 00:12:49.448 "name": "nvmf_tgt_poll_group_002", 00:12:49.448 "admin_qpairs": 1, 00:12:49.448 "io_qpairs": 84, 00:12:49.448 "current_admin_qpairs": 0, 00:12:49.448 "current_io_qpairs": 0, 00:12:49.448 "pending_bdev_io": 0, 00:12:49.448 "completed_nvme_io": 232, 00:12:49.448 "transports": [ 00:12:49.448 { 00:12:49.448 "trtype": "TCP" 00:12:49.448 } 00:12:49.448 ] 00:12:49.448 }, 00:12:49.448 { 00:12:49.448 "name": "nvmf_tgt_poll_group_003", 00:12:49.448 "admin_qpairs": 2, 00:12:49.448 "io_qpairs": 84, 00:12:49.448 "current_admin_qpairs": 0, 00:12:49.448 "current_io_qpairs": 0, 00:12:49.448 "pending_bdev_io": 0, 00:12:49.448 "completed_nvme_io": 104, 00:12:49.448 "transports": [ 00:12:49.448 { 00:12:49.448 "trtype": "TCP" 00:12:49.448 } 00:12:49.448 ] 00:12:49.448 } 00:12:49.448 ] 00:12:49.448 }' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.448 rmmod nvme_tcp 00:12:49.448 rmmod nvme_fabrics 00:12:49.448 rmmod nvme_keyring 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:49.448 19:42:58 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3907790 ']' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3907790 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3907790 ']' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3907790 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3907790 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3907790' 00:12:49.448 killing process with pid 3907790 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3907790 00:12:49.448 19:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3907790 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.705 19:42:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.243 19:43:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.243 00:12:52.243 real 0m24.916s 00:12:52.243 user 1m20.946s 00:12:52.243 sys 0m4.032s 00:12:52.243 19:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.243 19:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 ************************************ 00:12:52.243 END TEST nvmf_rpc 00:12:52.243 ************************************ 00:12:52.243 19:43:01 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:52.243 19:43:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.243 19:43:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.243 19:43:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 ************************************ 00:12:52.243 START TEST nvmf_invalid 00:12:52.243 ************************************ 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:52.243 * Looking for test storage... 
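The jsum helper traced above (target/rpc.sh@19-20) sums one numeric field across every poll group returned by nvmf_get_stats. A minimal sketch of that pattern, assuming the stats JSON has already been captured in $stats (the exact plumbing inside the real helper in test/nvmf/target/rpc.sh may differ):

    jsum() {
        local filter=$1
        # jq pulls one number per poll group; awk adds them up.
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    # Against the stats dumped above: '.poll_groups[].admin_qpairs' -> 2+2+1+2 = 7
    # and '.poll_groups[].io_qpairs' -> 4*84 = 336, matching the (( 7 > 0 )) and
    # (( 336 > 0 )) checks in the trace.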
00:12:52.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.243 19:43:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.154 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:54.154 00:12:54.154 --- 10.0.0.2 ping statistics --- 00:12:54.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.154 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:12:54.154 00:12:54.154 --- 10.0.0.1 ping statistics --- 00:12:54.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.154 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3912274 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3912274 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3912274 ']' 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.154 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.154 [2024-07-25 19:43:03.311734] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
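The nvmf_tcp_init sequence traced above wires the two cvl_* ports into a split-namespace test network before the target starts: the target-side port (cvl_0_0) is moved into a fresh network namespace with 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, and both directions are verified with ping. A condensed sketch of that sequence, with the commands lifted from the trace (run as root, with the cvl_* interfaces already present):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port goes into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
    # nvmf_tgt is then launched inside the namespace, as in the nvmfappstart trace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF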
00:12:54.154 [2024-07-25 19:43:03.311812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.154 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.154 [2024-07-25 19:43:03.382119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.154 [2024-07-25 19:43:03.475370] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.154 [2024-07-25 19:43:03.475436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.154 [2024-07-25 19:43:03.475454] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.154 [2024-07-25 19:43:03.475467] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.154 [2024-07-25 19:43:03.475479] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.154 [2024-07-25 19:43:03.475563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.154 [2024-07-25 19:43:03.475621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.154 [2024-07-25 19:43:03.475677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.154 [2024-07-25 19:43:03.475681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:54.411 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2095 00:12:54.668 [2024-07-25 19:43:03.909808] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:54.668 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:54.668 { 00:12:54.668 "nqn": "nqn.2016-06.io.spdk:cnode2095", 00:12:54.668 "tgt_name": "foobar", 00:12:54.668 "method": "nvmf_create_subsystem", 00:12:54.668 "req_id": 1 00:12:54.668 } 00:12:54.668 Got JSON-RPC error response 00:12:54.668 response: 00:12:54.668 { 00:12:54.668 "code": -32603, 00:12:54.668 "message": "Unable to find target foobar" 00:12:54.668 }' 00:12:54.668 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:54.668 { 00:12:54.668 "nqn": "nqn.2016-06.io.spdk:cnode2095", 00:12:54.668 "tgt_name": "foobar", 00:12:54.668 "method": "nvmf_create_subsystem", 00:12:54.668 "req_id": 1 00:12:54.668 } 00:12:54.668 Got JSON-RPC error response 00:12:54.668 response: 00:12:54.668 { 00:12:54.668 "code": -32603, 00:12:54.668 "message": "Unable to find target foobar" 00:12:54.668 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:54.668 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:54.668 19:43:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9512 00:12:54.925 [2024-07-25 19:43:04.178681] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9512: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:54.925 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:54.925 { 00:12:54.925 "nqn": "nqn.2016-06.io.spdk:cnode9512", 00:12:54.925 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.925 "method": "nvmf_create_subsystem", 00:12:54.925 "req_id": 1 00:12:54.925 } 00:12:54.925 Got JSON-RPC error response 00:12:54.925 response: 00:12:54.925 { 00:12:54.925 "code": -32602, 00:12:54.925 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.925 }' 00:12:54.925 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:54.925 { 00:12:54.925 "nqn": "nqn.2016-06.io.spdk:cnode9512", 00:12:54.925 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.925 "method": "nvmf_create_subsystem", 00:12:54.925 "req_id": 1 00:12:54.925 } 00:12:54.925 Got JSON-RPC error response 00:12:54.925 response: 00:12:54.925 { 00:12:54.925 "code": -32602, 00:12:54.925 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.925 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.925 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:54.925 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31516 00:12:55.183 [2024-07-25 19:43:04.431524] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31516: invalid model number 'SPDK_Controller' 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:55.183 { 00:12:55.183 "nqn": "nqn.2016-06.io.spdk:cnode31516", 00:12:55.183 "model_number": "SPDK_Controller\u001f", 00:12:55.183 "method": "nvmf_create_subsystem", 00:12:55.183 "req_id": 1 00:12:55.183 } 00:12:55.183 Got JSON-RPC error response 00:12:55.183 response: 00:12:55.183 { 00:12:55.183 "code": -32602, 00:12:55.183 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.183 }' 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:55.183 { 00:12:55.183 "nqn": "nqn.2016-06.io.spdk:cnode31516", 00:12:55.183 "model_number": "SPDK_Controller\u001f", 00:12:55.183 "method": "nvmf_create_subsystem", 00:12:55.183 "req_id": 1 00:12:55.183 } 00:12:55.183 Got JSON-RPC error response 00:12:55.183 response: 00:12:55.183 { 00:12:55.183 "code": -32602, 00:12:55.183 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.183 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.183 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 123 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:55.184 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.185 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.185 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:12:55.185 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'o|*)M>EwL'\''ej6X3{z9FC)' 00:12:55.185 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'o|*)M>EwL'\''ej6X3{z9FC)' nqn.2016-06.io.spdk:cnode30849 00:12:55.443 [2024-07-25 19:43:04.728503] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30849: invalid serial number 'o|*)M>EwL'ej6X3{z9FC)' 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:55.443 { 00:12:55.443 "nqn": "nqn.2016-06.io.spdk:cnode30849", 00:12:55.443 "serial_number": "o|*)M>EwL'\''ej6X3{z9FC)", 00:12:55.443 "method": "nvmf_create_subsystem", 00:12:55.443 "req_id": 1 00:12:55.443 } 00:12:55.443 Got JSON-RPC error response 00:12:55.443 response: 00:12:55.443 { 00:12:55.443 "code": 
-32602, 00:12:55.443 "message": "Invalid SN o|*)M>EwL'\''ej6X3{z9FC)" 00:12:55.443 }' 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:55.443 { 00:12:55.443 "nqn": "nqn.2016-06.io.spdk:cnode30849", 00:12:55.443 "serial_number": "o|*)M>EwL'ej6X3{z9FC)", 00:12:55.443 "method": "nvmf_create_subsystem", 00:12:55.443 "req_id": 1 00:12:55.443 } 00:12:55.443 Got JSON-RPC error response 00:12:55.443 response: 00:12:55.443 { 00:12:55.443 "code": -32602, 00:12:55.443 "message": "Invalid SN o|*)M>EwL'ej6X3{z9FC)" 00:12:55.443 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:55.443 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:55.444 19:43:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:55.444 19:43:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:55.444 19:43:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:55.444 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.445 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ',N$B`i %b'\''T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY' 00:12:55.706 19:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ',N$B`i %b'\''T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY' nqn.2016-06.io.spdk:cnode20407 00:12:55.706 [2024-07-25 19:43:05.117734] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20407: invalid model number ',N$B`i %b'T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY' 00:12:55.965 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:55.965 { 00:12:55.965 "nqn": "nqn.2016-06.io.spdk:cnode20407", 00:12:55.965 "model_number": ",N$B`i %b'\''T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY", 00:12:55.965 "method": "nvmf_create_subsystem", 00:12:55.965 "req_id": 1 00:12:55.965 } 00:12:55.965 Got JSON-RPC error response 00:12:55.965 response: 00:12:55.965 { 00:12:55.965 "code": -32602, 00:12:55.965 "message": "Invalid MN ,N$B`i %b'\''T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY" 00:12:55.965 }' 00:12:55.965 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:55.965 { 00:12:55.965 "nqn": "nqn.2016-06.io.spdk:cnode20407", 00:12:55.965 "model_number": ",N$B`i %b'T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY", 00:12:55.965 "method": "nvmf_create_subsystem", 00:12:55.965 "req_id": 1 00:12:55.965 } 00:12:55.965 Got JSON-RPC error response 00:12:55.965 response: 00:12:55.965 { 00:12:55.965 "code": -32602, 00:12:55.965 "message": "Invalid MN ,N$B`i %b'T`SPSx,oKFVw%,Mv~[#.O+e~1B&l]AY" 00:12:55.965 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.965 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:55.965 [2024-07-25 19:43:05.366651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.965 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:56.222 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:56.222 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:56.222 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:56.222 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:56.222 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:56.480 [2024-07-25 19:43:05.868257] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:56.480 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:56.480 { 00:12:56.480 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.480 "listen_address": { 00:12:56.480 "trtype": "tcp", 00:12:56.480 "traddr": "", 00:12:56.480 "trsvcid": "4421" 00:12:56.480 }, 00:12:56.480 "method": "nvmf_subsystem_remove_listener", 00:12:56.480 "req_id": 1 00:12:56.480 } 00:12:56.480 Got JSON-RPC error response 00:12:56.480 response: 00:12:56.480 { 00:12:56.480 "code": -32602, 00:12:56.480 "message": "Invalid parameters" 00:12:56.480 }' 00:12:56.480 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:56.480 { 00:12:56.480 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.480 "listen_address": { 00:12:56.480 "trtype": "tcp", 00:12:56.480 "traddr": "", 00:12:56.480 "trsvcid": "4421" 00:12:56.480 }, 00:12:56.480 "method": "nvmf_subsystem_remove_listener", 00:12:56.480 "req_id": 1 00:12:56.480 } 00:12:56.480 Got JSON-RPC error response 00:12:56.480 response: 00:12:56.480 { 00:12:56.480 "code": -32602, 00:12:56.480 "message": "Invalid parameters" 00:12:56.480 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:56.480 19:43:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15925 -i 0 00:12:56.766 [2024-07-25 19:43:06.117029] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15925: invalid cntlid range [0-65519] 00:12:56.766 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:56.766 { 00:12:56.766 "nqn": "nqn.2016-06.io.spdk:cnode15925", 00:12:56.766 "min_cntlid": 0, 00:12:56.766 "method": "nvmf_create_subsystem", 00:12:56.766 "req_id": 1 00:12:56.766 } 00:12:56.766 Got JSON-RPC error response 00:12:56.766 response: 00:12:56.766 { 00:12:56.766 "code": -32602, 00:12:56.766 "message": "Invalid cntlid range [0-65519]" 00:12:56.766 }' 00:12:56.766 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:56.766 { 00:12:56.766 "nqn": "nqn.2016-06.io.spdk:cnode15925", 00:12:56.766 "min_cntlid": 0, 00:12:56.766 "method": "nvmf_create_subsystem", 00:12:56.766 "req_id": 1 00:12:56.766 } 00:12:56.766 Got JSON-RPC error response 00:12:56.766 response: 00:12:56.766 { 00:12:56.766 "code": -32602, 00:12:56.766 "message": "Invalid cntlid range [0-65519]" 00:12:56.766 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:12:56.766 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2985 -i 65520 00:12:57.023 [2024-07-25 19:43:06.365852] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2985: invalid cntlid range [65520-65519] 00:12:57.023 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:57.023 { 00:12:57.023 "nqn": "nqn.2016-06.io.spdk:cnode2985", 00:12:57.023 "min_cntlid": 65520, 00:12:57.023 "method": "nvmf_create_subsystem", 00:12:57.023 "req_id": 1 00:12:57.023 } 00:12:57.023 Got JSON-RPC error response 00:12:57.023 response: 00:12:57.023 { 00:12:57.023 "code": -32602, 00:12:57.023 "message": "Invalid cntlid range [65520-65519]" 00:12:57.023 }' 00:12:57.023 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:57.023 { 00:12:57.023 "nqn": "nqn.2016-06.io.spdk:cnode2985", 00:12:57.023 "min_cntlid": 65520, 00:12:57.023 "method": "nvmf_create_subsystem", 00:12:57.023 "req_id": 1 00:12:57.023 } 00:12:57.023 Got JSON-RPC error response 00:12:57.023 response: 00:12:57.023 { 00:12:57.023 "code": -32602, 00:12:57.023 "message": "Invalid cntlid range [65520-65519]" 00:12:57.023 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.023 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9062 -I 0 00:12:57.280 [2024-07-25 19:43:06.606620] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9062: invalid cntlid range [1-0] 00:12:57.281 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:57.281 { 00:12:57.281 "nqn": "nqn.2016-06.io.spdk:cnode9062", 00:12:57.281 "max_cntlid": 0, 00:12:57.281 "method": "nvmf_create_subsystem", 00:12:57.281 "req_id": 1 00:12:57.281 } 00:12:57.281 Got JSON-RPC error response 00:12:57.281 response: 00:12:57.281 { 00:12:57.281 "code": -32602, 00:12:57.281 "message": "Invalid cntlid range [1-0]" 00:12:57.281 }' 00:12:57.281 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:57.281 { 00:12:57.281 "nqn": "nqn.2016-06.io.spdk:cnode9062", 00:12:57.281 "max_cntlid": 0, 00:12:57.281 "method": "nvmf_create_subsystem", 00:12:57.281 "req_id": 1 00:12:57.281 } 00:12:57.281 Got JSON-RPC error response 00:12:57.281 response: 00:12:57.281 { 00:12:57.281 "code": -32602, 00:12:57.281 "message": "Invalid cntlid range [1-0]" 00:12:57.281 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.281 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14059 -I 65520 00:12:57.538 [2024-07-25 19:43:06.855517] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14059: invalid cntlid range [1-65520] 00:12:57.538 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:57.538 { 00:12:57.538 "nqn": "nqn.2016-06.io.spdk:cnode14059", 00:12:57.538 "max_cntlid": 65520, 00:12:57.538 "method": "nvmf_create_subsystem", 00:12:57.538 "req_id": 1 00:12:57.538 } 00:12:57.538 Got JSON-RPC error response 00:12:57.538 response: 00:12:57.538 { 00:12:57.538 "code": -32602, 00:12:57.538 "message": "Invalid cntlid range [1-65520]" 00:12:57.538 }' 00:12:57.538 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- 
# [[ request: 00:12:57.538 { 00:12:57.538 "nqn": "nqn.2016-06.io.spdk:cnode14059", 00:12:57.538 "max_cntlid": 65520, 00:12:57.538 "method": "nvmf_create_subsystem", 00:12:57.538 "req_id": 1 00:12:57.538 } 00:12:57.538 Got JSON-RPC error response 00:12:57.538 response: 00:12:57.538 { 00:12:57.538 "code": -32602, 00:12:57.538 "message": "Invalid cntlid range [1-65520]" 00:12:57.538 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.538 19:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15995 -i 6 -I 5 00:12:57.796 [2024-07-25 19:43:07.124425] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15995: invalid cntlid range [6-5] 00:12:57.796 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:57.796 { 00:12:57.796 "nqn": "nqn.2016-06.io.spdk:cnode15995", 00:12:57.796 "min_cntlid": 6, 00:12:57.796 "max_cntlid": 5, 00:12:57.796 "method": "nvmf_create_subsystem", 00:12:57.796 "req_id": 1 00:12:57.796 } 00:12:57.796 Got JSON-RPC error response 00:12:57.796 response: 00:12:57.796 { 00:12:57.796 "code": -32602, 00:12:57.796 "message": "Invalid cntlid range [6-5]" 00:12:57.796 }' 00:12:57.796 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:57.796 { 00:12:57.796 "nqn": "nqn.2016-06.io.spdk:cnode15995", 00:12:57.796 "min_cntlid": 6, 00:12:57.796 "max_cntlid": 5, 00:12:57.796 "method": "nvmf_create_subsystem", 00:12:57.796 "req_id": 1 00:12:57.796 } 00:12:57.796 Got JSON-RPC error response 00:12:57.796 response: 00:12:57.796 { 00:12:57.796 "code": -32602, 00:12:57.796 "message": "Invalid cntlid range [6-5]" 00:12:57.796 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.796 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:58.054 { 00:12:58.054 "name": "foobar", 00:12:58.054 "method": "nvmf_delete_target", 00:12:58.054 "req_id": 1 00:12:58.054 } 00:12:58.054 Got JSON-RPC error response 00:12:58.054 response: 00:12:58.054 { 00:12:58.054 "code": -32602, 00:12:58.054 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:58.054 }' 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:58.054 { 00:12:58.054 "name": "foobar", 00:12:58.054 "method": "nvmf_delete_target", 00:12:58.054 "req_id": 1 00:12:58.054 } 00:12:58.054 Got JSON-RPC error response 00:12:58.054 response: 00:12:58.054 { 00:12:58.054 "code": -32602, 00:12:58.054 "message": "The specified target doesn't exist, cannot delete it." 
00:12:58.054 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.054 rmmod nvme_tcp 00:12:58.054 rmmod nvme_fabrics 00:12:58.054 rmmod nvme_keyring 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3912274 ']' 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3912274 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3912274 ']' 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3912274 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3912274 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3912274' 00:12:58.054 killing process with pid 3912274 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3912274 00:12:58.054 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3912274 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.313 19:43:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.216 19:43:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.216 00:13:00.216 real 0m8.450s 00:13:00.216 user 0m19.848s 00:13:00.216 sys 0m2.330s 00:13:00.216 19:43:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.216 19:43:09 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.216 ************************************ 00:13:00.216 END TEST nvmf_invalid 00:13:00.216 ************************************ 00:13:00.216 19:43:09 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:00.216 19:43:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.216 19:43:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.216 19:43:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.473 ************************************ 00:13:00.473 START TEST nvmf_abort 00:13:00.473 ************************************ 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:00.473 * Looking for test storage... 00:13:00.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.473 19:43:09 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.474 19:43:09 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.474 19:43:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.376 
19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.376 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.377 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.377 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:13:02.377 00:13:02.377 --- 10.0.0.2 ping statistics --- 00:13:02.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.377 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:13:02.377 00:13:02.377 --- 10.0.0.1 ping statistics --- 00:13:02.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.377 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3914905 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3914905 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3914905 ']' 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.377 19:43:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.636 [2024-07-25 19:43:11.809743] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
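The trace around this point is nvmf/common.sh's nvmf_tcp_init: one port of the detected E810 pair is moved into a private network namespace so that target and initiator traffic cross a real link before nvmf_tgt starts. Condensed into plain commands, using only the interface names, addresses and namespace name recorded in this run, the bring-up is roughly:

  ip netns add cvl_0_0_ns_spdk                            # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

nvmf_tgt itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the process whose SPDK startup banner and DPDK EAL parameters appear in the surrounding lines.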
00:13:02.636 [2024-07-25 19:43:11.809822] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.636 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.636 [2024-07-25 19:43:11.881365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.636 [2024-07-25 19:43:11.974441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.636 [2024-07-25 19:43:11.974506] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.636 [2024-07-25 19:43:11.974523] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.636 [2024-07-25 19:43:11.974545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.636 [2024-07-25 19:43:11.974558] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.637 [2024-07-25 19:43:11.974654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.637 [2024-07-25 19:43:11.974712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.637 [2024-07-25 19:43:11.974715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 [2024-07-25 19:43:12.121297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 Malloc0 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 Delay0 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:02.897 19:43:12 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 [2024-07-25 19:43:12.192022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.897 19:43:12 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:02.897 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.897 [2024-07-25 19:43:12.297128] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:05.428 Initializing NVMe Controllers 00:13:05.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:05.428 controller IO queue size 128 less than required 00:13:05.428 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:05.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:05.428 Initialization complete. Launching workers. 
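For reference, the abort test body being traced here (target/abort.sh) reduces to the following RPC sequence plus one run of the abort example; every argument is copied from this log, with the long workspace paths abbreviated to the spdk checkout:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB malloc bdev, 4096-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev keeps each I/O outstanding long enough for the example's abort commands to find it still in flight, which is what drives the submitted/success counters reported in the lines that follow.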
00:13:05.428 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33834 00:13:05.428 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33895, failed to submit 62 00:13:05.428 success 33838, unsuccess 57, failed 0 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.428 rmmod nvme_tcp 00:13:05.428 rmmod nvme_fabrics 00:13:05.428 rmmod nvme_keyring 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3914905 ']' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3914905 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3914905 ']' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3914905 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3914905 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3914905' 00:13:05.428 killing process with pid 3914905 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3914905 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3914905 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.428 19:43:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.333 19:43:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.333 00:13:07.333 real 0m7.064s 00:13:07.333 user 0m10.215s 00:13:07.333 sys 0m2.423s 00:13:07.333 19:43:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:07.333 19:43:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:07.333 ************************************ 00:13:07.333 END TEST nvmf_abort 00:13:07.333 ************************************ 00:13:07.333 19:43:16 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:07.333 19:43:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:07.333 19:43:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:07.333 19:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:07.590 ************************************ 00:13:07.590 START TEST nvmf_ns_hotplug_stress 00:13:07.590 ************************************ 00:13:07.590 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:07.590 * Looking for test storage... 00:13:07.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.590 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.590 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.591 19:43:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.591 19:43:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.591 19:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:09.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:09.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.489 19:43:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:09.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:09.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
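Before the target comes up, nvmf_tcp_init (traced in the entries that follow) splits the two NIC ports between the two roles: cvl_0_0 is moved into a private network namespace for the SPDK target and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1; an iptables rule opens TCP port 4420 and a ping in each direction verifies the path. Condensed into a standalone sketch, assuming root privileges and the interface names this rig reports:

  TARGET_IF=cvl_0_0            # port handed to the SPDK target
  INITIATOR_IF=cvl_0_1         # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk           # target network namespace

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # open the NVMe/TCP listener port toward the initiator-side interface
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The target application itself is then launched under the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the NVMF_APP command line further down carries the netns prefix.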
00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.489 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.747 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.747 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.747 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.747 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.747 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.747 19:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.747 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:13:09.747 00:13:09.747 --- 10.0.0.2 ping statistics --- 00:13:09.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.747 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:13:09.747 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:13:09.748 00:13:09.748 --- 10.0.0.1 ping statistics --- 00:13:09.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.748 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3917119 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3917119 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3917119 ']' 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.748 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.748 [2024-07-25 19:43:19.097275] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:13:09.748 [2024-07-25 19:43:19.097377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.748 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.748 [2024-07-25 19:43:19.168960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:10.005 [2024-07-25 19:43:19.258153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:10.005 [2024-07-25 19:43:19.258222] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.005 [2024-07-25 19:43:19.258236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.005 [2024-07-25 19:43:19.258248] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.005 [2024-07-25 19:43:19.258257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.005 [2024-07-25 19:43:19.258350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.005 [2024-07-25 19:43:19.258409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.005 [2024-07-25 19:43:19.258411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:10.005 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:10.262 [2024-07-25 19:43:19.655472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.262 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:10.520 19:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.777 [2024-07-25 19:43:20.154110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.777 19:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.034 19:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:11.291 Malloc0 00:13:11.291 19:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:11.548 Delay0 00:13:11.548 19:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.806 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:12.063 NULL1 00:13:12.063 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:12.320 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3917419 00:13:12.320 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:12.320 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:12.320 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.320 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.576 19:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.833 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:12.833 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:13.091 true 00:13:13.091 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:13.091 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.348 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.605 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:13.605 19:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:13.862 true 00:13:13.862 19:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:13.862 19:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.795 Read completed with error (sct=0, sc=11) 00:13:14.795 19:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.053 19:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:15.053 19:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:15.340 true 00:13:15.340 19:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:15.340 19:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.340 19:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.603 19:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:15.603 19:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:15.862 true 00:13:15.862 19:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:15.862 19:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 19:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.242 19:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:17.242 19:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:17.499 true 00:13:17.499 19:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:17.499 19:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.434 19:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.699 19:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:18.699 19:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:18.699 true 00:13:18.699 19:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:18.699 19:43:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.957 19:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.213 19:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:19.213 19:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:19.469 true 00:13:19.469 19:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:19.469 19:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.400 19:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.967 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:20.967 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:20.967 true 00:13:20.967 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:21.225 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.485 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.742 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:21.742 19:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:21.742 true 00:13:21.742 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:21.742 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.001 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.260 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:22.260 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:22.520 true 00:13:22.520 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:22.520 19:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.894 19:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.894 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:23.894 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:24.152 true 00:13:24.152 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:24.152 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.409 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.666 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:24.666 19:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:24.924 true 00:13:24.924 19:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:24.924 19:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.858 19:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.858 19:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:25.858 19:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:26.115 true 00:13:26.115 19:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:26.115 19:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.682 19:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.682 19:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:26.682 19:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:26.939 true 00:13:26.939 19:43:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:26.939 19:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.875 19:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.132 19:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:28.132 19:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:28.390 true 00:13:28.390 19:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:28.390 19:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.648 19:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.906 19:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:28.906 19:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:29.164 true 00:13:29.164 19:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:29.164 19:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.100 19:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.100 19:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:30.100 19:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:30.358 true 00:13:30.358 19:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:30.358 19:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.617 19:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.875 19:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:30.875 19:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:31.132 true 00:13:31.133 
19:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:31.133 19:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.100 19:43:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.358 19:43:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:32.358 19:43:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:32.616 true 00:13:32.616 19:43:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:32.616 19:43:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.876 19:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.134 19:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:33.134 19:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:33.134 true 00:13:33.134 19:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:33.134 19:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.076 19:43:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.341 19:43:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:34.341 19:43:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:34.599 true 00:13:34.599 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:34.599 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.164 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.164 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:35.164 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:35.422 true 00:13:35.422 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3917419 00:13:35.422 19:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.680 19:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.938 19:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:35.938 19:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:36.196 true 00:13:36.196 19:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:36.196 19:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.131 19:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.389 19:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:37.389 19:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:37.648 true 00:13:37.648 19:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:37.648 19:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.585 19:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.842 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:38.842 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:39.100 true 00:13:39.100 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:39.100 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.357 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.615 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:39.615 19:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:39.872 true 00:13:39.872 19:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:39.872 19:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.808 19:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.808 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:40.808 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:41.065 true 00:13:41.065 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:41.065 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.323 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.581 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:41.581 19:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:41.839 true 00:13:41.839 19:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:41.839 19:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.774 Initializing NVMe Controllers 00:13:42.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:42.774 Controller IO queue size 128, less than required. 00:13:42.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:42.774 Controller IO queue size 128, less than required. 00:13:42.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:42.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:42.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:42.774 Initialization complete. Launching workers. 
00:13:42.774 ======================================================== 00:13:42.774 Latency(us) 00:13:42.774 Device Information : IOPS MiB/s Average min max 00:13:42.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1014.22 0.50 66593.19 2978.50 1022119.96 00:13:42.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11298.75 5.52 11295.59 3089.64 362080.85 00:13:42.774 ======================================================== 00:13:42.774 Total : 12312.97 6.01 15850.46 2978.50 1022119.96 00:13:42.774 00:13:42.774 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.031 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:43.031 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:43.288 true 00:13:43.288 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3917419 00:13:43.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3917419) - No such process 00:13:43.288 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3917419 00:13:43.288 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.545 19:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.802 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:43.802 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:43.802 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:43.802 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:43.802 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:44.059 null0 00:13:44.059 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.059 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:44.059 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:44.315 null1 00:13:44.315 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.315 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:44.315 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:44.573 null2 00:13:44.573 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.573 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:13:44.573 19:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:44.829 null3 00:13:44.829 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.830 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:44.830 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:45.086 null4 00:13:45.086 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:45.086 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:45.086 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:45.343 null5 00:13:45.343 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:45.343 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:45.343 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:45.601 null6 00:13:45.601 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:45.601 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:45.601 19:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:45.858 null7 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
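At this point eight 100 MB null bdevs (null0 through null7) have been created, and the interleaved xtrace around this point shows one background add_remove worker being started per bdev, each pinned to its own namespace ID, with its PID collected into pids. Untangled from the interleaving, the pattern amounts to the sketch below; names and paths are taken from the trace, while the remove half of add_remove and the final wait are assumptions, since they are not visible in this excerpt:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  nthreads=8

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          # assumed counterpart (not traced in this excerpt): detach the
          # namespace again so the next pass re-attaches it -- the hotplug
          # part of the stress
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096    # 100 MB bdev, 4096-byte blocks
  done

  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &              # worker hammers NSID i+1
      pids+=($!)
  done
  # presumably followed by: wait "${pids[@]}"

The remaining workers are started the same way in the entries that follow.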
00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.858 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
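Each add_remove worker being launched in this stretch of the log repeatedly attaches its null bdev to nqn.2016-06.io.spdk:cnode1 as a namespace and removes it again, racing against the seven other workers. A sketch of the worker and of the spawn/wait pattern, reconstructed from the @14-@18 and @62-@66 trace lines (rpc.py again abbreviates the full script path), not the verbatim script source:

    # add_remove <nsid> <bdev>: ten add/remove cycles, matching the (( i < 10 )) bound in the trace
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # One background worker per null bdev; the script then blocks on all of them,
    # which is the "wait 3921459 3921460 ..." call recorded just below.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"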
00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3921459 3921460 3921462 3921464 3921466 3921468 3921470 3921472 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.859 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:46.116 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.373 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:46.630 19:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.630 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.888 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.889 19:43:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:46.889 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.889 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.889 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.147 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.405 19:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.685 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.685 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.685 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.685 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.685 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.944 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.203 
19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.203 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.460 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.718 19:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.976 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.234 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.492 
19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.492 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.750 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.007 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.264 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.265 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.523 
19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.523 19:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.780 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.781 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.038 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.039 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.297 rmmod nvme_tcp 00:13:51.297 rmmod nvme_fabrics 00:13:51.297 rmmod nvme_keyring 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3917119 ']' 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3917119 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3917119 ']' 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3917119 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.297 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917119 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917119' 00:13:51.557 killing process with pid 3917119 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3917119 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3917119 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.557 19:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.084 19:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.084 00:13:54.084 real 0m46.234s 00:13:54.084 user 3m31.182s 00:13:54.084 sys 0m15.889s 00:13:54.084 19:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.084 19:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.084 ************************************ 00:13:54.084 END TEST nvmf_ns_hotplug_stress 00:13:54.084 ************************************ 00:13:54.084 19:44:03 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:54.084 19:44:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:54.084 19:44:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:54.084 19:44:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:54.084 ************************************ 00:13:54.084 START TEST nvmf_connect_stress 00:13:54.084 ************************************ 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:54.084 * Looking for test storage... 
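For reference, the teardown traced just above the END TEST banner (nvmftestfini in test/nvmf/common.sh) unloads the host-side NVMe modules, kills the nvmf_tgt process, and removes the test network namespace. A condensed sketch of that sequence; the pid, namespace, and interface names are the ones from this run, and the namespace removal is the assumed effect of _remove_spdk_ns, whose commands run with xtrace disabled in this log:

# Hedged sketch of the nvmftestfini path seen above; names are taken from this run's trace.
sync
modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"    # $nvmfpid is the nvmf_tgt pid recorded at startup (3917119 here)
ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns (not echoed in the log)
ip -4 addr flush cvl_0_1              # flush the initiator-side interface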
00:13:54.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.084 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.085 19:44:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.982 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:55.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:55.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:55.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.983 19:44:05 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:55.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:13:55.983 00:13:55.983 --- 10.0.0.2 ping statistics --- 00:13:55.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.983 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:55.983 00:13:55.983 --- 10.0.0.1 ping statistics --- 00:13:55.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.983 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3924221 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3924221 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3924221 ']' 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:55.983 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.983 [2024-07-25 19:44:05.277207] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
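For reference, the nvmf_tcp_init sequence traced above builds the two-endpoint topology the test runs on: one NIC port (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and both directions are verified with ping. A sketch of that setup using the names from this run:

# Sketch of nvmf_tcp_init as traced above; interface and namespace names are this run's.
target_if=cvl_0_0        # moved into the namespace, gets 10.0.0.2 (NVMF_FIRST_TARGET_IP)
initiator_if=cvl_0_1     # stays in the root namespace, gets 10.0.0.1 (NVMF_INITIATOR_IP)
netns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$netns"
ip link set "$target_if" netns "$netns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$netns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$netns" ip link set "$target_if" up
ip netns exec "$netns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$netns" ping -c 1 10.0.0.1   # target -> initiator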
00:13:55.984 [2024-07-25 19:44:05.277295] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.984 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.984 [2024-07-25 19:44:05.348997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.242 [2024-07-25 19:44:05.442312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.242 [2024-07-25 19:44:05.442379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.242 [2024-07-25 19:44:05.442405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.242 [2024-07-25 19:44:05.442419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.242 [2024-07-25 19:44:05.442439] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.242 [2024-07-25 19:44:05.442542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.242 [2024-07-25 19:44:05.445076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.242 [2024-07-25 19:44:05.445088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.242 [2024-07-25 19:44:05.591004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.242 [2024-07-25 19:44:05.618223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.242 NULL1 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3924364 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.242 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.243 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.500 19:44:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.756 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.756 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:56.756 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.756 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.756 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.013 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.013 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:57.013 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.013 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.013 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.271 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.271 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:57.271 19:44:06 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.271 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.271 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.835 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.835 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:57.835 19:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.835 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.835 19:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.092 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.092 19:44:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:58.092 19:44:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.092 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.092 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.349 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.349 19:44:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:58.349 19:44:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.349 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.349 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.605 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.605 19:44:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:58.605 19:44:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.605 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.605 19:44:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.862 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.862 19:44:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:58.862 19:44:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.862 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.862 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.427 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.427 19:44:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:59.427 19:44:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.427 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.427 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.684 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.684 19:44:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:59.684 19:44:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:13:59.684 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.684 19:44:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.941 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.941 19:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:13:59.941 19:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.941 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.941 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.198 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.198 19:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:00.198 19:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.198 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.198 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.455 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.455 19:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:00.455 19:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.455 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.455 19:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.030 19:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:01.030 19:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.030 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.030 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.287 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.287 19:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:01.287 19:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.287 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.287 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.544 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.544 19:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:01.544 19:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.544 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.544 19:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.801 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.801 19:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:01.801 19:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.801 19:44:11 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.801 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.059 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.059 19:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:02.059 19:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.059 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.059 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.623 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.623 19:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:02.623 19:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.623 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.623 19:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.879 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.879 19:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:02.879 19:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.879 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.879 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.135 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.135 19:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:03.135 19:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.135 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.135 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.392 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.392 19:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:03.392 19:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.392 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.392 19:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.649 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.649 19:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:03.649 19:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.649 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.649 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.258 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.258 19:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:04.258 19:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.258 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:04.258 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.527 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.527 19:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:04.527 19:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.527 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.527 19:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.784 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.784 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:04.784 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.784 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.784 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.041 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.041 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:05.041 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.041 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.041 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.298 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.298 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:05.298 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.298 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.298 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.862 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.862 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:05.862 19:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.862 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.862 19:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.119 19:44:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.119 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:06.119 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.119 19:44:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.119 19:44:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.377 19:44:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.377 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:06.377 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.377 19:44:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.377 19:44:15 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.377 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3924364 00:14:06.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3924364) - No such process 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3924364 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.634 19:44:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.634 rmmod nvme_tcp 00:14:06.634 rmmod nvme_fabrics 00:14:06.634 rmmod nvme_keyring 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3924221 ']' 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3924221 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3924221 ']' 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3924221 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924221 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924221' 00:14:06.634 killing process with pid 3924221 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3924221 00:14:06.634 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3924221 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
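For reference, the long run of kill -0 / rpc_cmd pairs above is the driver loop of connect_stress.sh: the connect_stress tool is started in the background against the 10.0.0.2:4420 listener, a batch of twenty RPCs is written to rpc.txt (the cat commands at line 28, whose input is not echoed in this log), and the batch is replayed against the target for as long as the stress process stays alive; the loop ends when kill -0 reports "No such process". A hedged sketch of that pattern; the placeholder RPC below stands in for the unseen batch contents, and rpc_cmd's batching is approximated with per-line rpc.py calls:

# Hedged sketch of the connect_stress driver loop traced above (connect_stress.sh@20-39).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # workspace path from this run
rpc_py=$spdk/scripts/rpc.py
rpc_txt=$spdk/test/nvmf/target/rpc.txt

# start the stress tool in the background against the listener created earlier
"$spdk"/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
perf_pid=$!

: > "$rpc_txt"
for i in $(seq 1 20); do
    echo "nvmf_get_subsystems" >> "$rpc_txt"    # placeholder; the real script appends its own RPC batch
done

# replay the RPC batch while the stress tool is still running
while kill -0 "$perf_pid" 2> /dev/null; do
    while read -r cmd; do
        "$rpc_py" $cmd > /dev/null
    done < "$rpc_txt"
done
wait "$perf_pid"
rm -f "$rpc_txt"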
00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.892 19:44:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.424 19:44:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.424 00:14:09.424 real 0m15.256s 00:14:09.424 user 0m38.256s 00:14:09.424 sys 0m5.867s 00:14:09.424 19:44:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.424 19:44:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.424 ************************************ 00:14:09.424 END TEST nvmf_connect_stress 00:14:09.424 ************************************ 00:14:09.424 19:44:18 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:09.424 19:44:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.424 19:44:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.424 19:44:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.424 ************************************ 00:14:09.424 START TEST nvmf_fused_ordering 00:14:09.424 ************************************ 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:09.424 * Looking for test storage... 
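For reference, every test in this log is launched through the run_test helper visible above (run_test nvmf_fused_ordering .../fused_ordering.sh --transport=tcp): it prints the START TEST / END TEST banners, times the script (the real/user/sys summary), and returns the script's exit code. A simplified sketch of a wrapper with that shape; the real helper lives in autotest_common.sh and also manages xtrace and argument checks:

# Simplified run_test-style wrapper; illustrative only, not the autotest_common.sh implementation.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# usage mirroring the invocation in the log:
# run_test_sketch nvmf_fused_ordering "$spdk"/test/nvmf/target/fused_ordering.sh --transport=tcp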
00:14:09.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.424 19:44:18 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.425 19:44:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:11.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:11.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:11.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.324 19:44:20 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:11.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.324 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:14:11.325 00:14:11.325 --- 10.0.0.2 ping statistics --- 00:14:11.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.325 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:14:11.325 00:14:11.325 --- 10.0.0.1 ping statistics --- 00:14:11.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.325 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3927508 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3927508 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3927508 ']' 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.325 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.325 [2024-07-25 19:44:20.627362] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
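The trace above shows nvmftestinit turning the two detected E810 ports into a point-to-point NVMe/TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), connectivity is checked with ping in both directions, and nvmf_tgt is launched inside the namespace before the rpc_cmd calls that follow provision the subsystem under test. A minimal standalone sketch of the same bring-up, assuming a built SPDK tree at the workspace path used in this run and that scripts/rpc.py stands in for the rpc_cmd helper; interface and namespace names simply mirror the ones detected here, and the sleep stands in for the autotest's waitforlisten:

#!/usr/bin/env bash
# Hedged sketch of the bring-up traced above - not the verbatim autotest code.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target side, moved into the namespace
INI_IF=cvl_0_1        # initiator side, stays in the default namespace

# Point-to-point topology: target at 10.0.0.2 inside the namespace,
# initiator at 10.0.0.1 outside it (same addressing as the trace).
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic on the default port and verify connectivity.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the target inside the namespace, then provision it over RPC with
# the same calls rpc_cmd issues in the trace below.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 3   # the autotest waits on /var/tmp/spdk.sock via waitforlisten instead

RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Exercise the subsystem with the fused_ordering app, as the trace does.
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'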
00:14:11.325 [2024-07-25 19:44:20.627460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.325 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.325 [2024-07-25 19:44:20.691468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.583 [2024-07-25 19:44:20.775306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.583 [2024-07-25 19:44:20.775369] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.583 [2024-07-25 19:44:20.775400] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.583 [2024-07-25 19:44:20.775412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.583 [2024-07-25 19:44:20.775422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.583 [2024-07-25 19:44:20.775461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 [2024-07-25 19:44:20.917727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 [2024-07-25 19:44:20.933948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 NULL1 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.583 19:44:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:11.583 [2024-07-25 19:44:20.978525] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:11.583 [2024-07-25 19:44:20.978571] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927533 ] 00:14:11.583 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.147 Attached to nqn.2016-06.io.spdk:cnode1 00:14:12.147 Namespace ID: 1 size: 1GB 00:14:12.147 fused_ordering(0) 00:14:12.147 fused_ordering(1) 00:14:12.147 fused_ordering(2) 00:14:12.147 fused_ordering(3) 00:14:12.147 fused_ordering(4) 00:14:12.147 fused_ordering(5) 00:14:12.147 fused_ordering(6) 00:14:12.147 fused_ordering(7) 00:14:12.147 fused_ordering(8) 00:14:12.147 fused_ordering(9) 00:14:12.147 fused_ordering(10) 00:14:12.147 fused_ordering(11) 00:14:12.147 fused_ordering(12) 00:14:12.147 fused_ordering(13) 00:14:12.147 fused_ordering(14) 00:14:12.147 fused_ordering(15) 00:14:12.147 fused_ordering(16) 00:14:12.147 fused_ordering(17) 00:14:12.147 fused_ordering(18) 00:14:12.147 fused_ordering(19) 00:14:12.147 fused_ordering(20) 00:14:12.147 fused_ordering(21) 00:14:12.147 fused_ordering(22) 00:14:12.147 fused_ordering(23) 00:14:12.147 fused_ordering(24) 00:14:12.147 fused_ordering(25) 00:14:12.147 fused_ordering(26) 00:14:12.147 fused_ordering(27) 00:14:12.147 fused_ordering(28) 00:14:12.147 fused_ordering(29) 00:14:12.147 fused_ordering(30) 00:14:12.147 fused_ordering(31) 00:14:12.147 fused_ordering(32) 00:14:12.147 fused_ordering(33) 00:14:12.147 fused_ordering(34) 00:14:12.147 fused_ordering(35) 00:14:12.147 fused_ordering(36) 00:14:12.148 fused_ordering(37) 00:14:12.148 fused_ordering(38) 00:14:12.148 fused_ordering(39) 00:14:12.148 fused_ordering(40) 00:14:12.148 fused_ordering(41) 00:14:12.148 fused_ordering(42) 00:14:12.148 fused_ordering(43) 00:14:12.148 fused_ordering(44) 00:14:12.148 fused_ordering(45) 
00:14:12.148 fused_ordering(46) 00:14:12.148 fused_ordering(47) 00:14:12.148 fused_ordering(48) 00:14:12.148 fused_ordering(49) 00:14:12.148 fused_ordering(50) 00:14:12.148 fused_ordering(51) 00:14:12.148 fused_ordering(52) 00:14:12.148 fused_ordering(53) 00:14:12.148 fused_ordering(54) 00:14:12.148 fused_ordering(55) 00:14:12.148 fused_ordering(56) 00:14:12.148 fused_ordering(57) 00:14:12.148 fused_ordering(58) 00:14:12.148 fused_ordering(59) 00:14:12.148 fused_ordering(60) 00:14:12.148 fused_ordering(61) 00:14:12.148 fused_ordering(62) 00:14:12.148 fused_ordering(63) 00:14:12.148 fused_ordering(64) 00:14:12.148 fused_ordering(65) 00:14:12.148 fused_ordering(66) 00:14:12.148 fused_ordering(67) 00:14:12.148 fused_ordering(68) 00:14:12.148 fused_ordering(69) 00:14:12.148 fused_ordering(70) 00:14:12.148 fused_ordering(71) 00:14:12.148 fused_ordering(72) 00:14:12.148 fused_ordering(73) 00:14:12.148 fused_ordering(74) 00:14:12.148 fused_ordering(75) 00:14:12.148 fused_ordering(76) 00:14:12.148 fused_ordering(77) 00:14:12.148 fused_ordering(78) 00:14:12.148 fused_ordering(79) 00:14:12.148 fused_ordering(80) 00:14:12.148 fused_ordering(81) 00:14:12.148 fused_ordering(82) 00:14:12.148 fused_ordering(83) 00:14:12.148 fused_ordering(84) 00:14:12.148 fused_ordering(85) 00:14:12.148 fused_ordering(86) 00:14:12.148 fused_ordering(87) 00:14:12.148 fused_ordering(88) 00:14:12.148 fused_ordering(89) 00:14:12.148 fused_ordering(90) 00:14:12.148 fused_ordering(91) 00:14:12.148 fused_ordering(92) 00:14:12.148 fused_ordering(93) 00:14:12.148 fused_ordering(94) 00:14:12.148 fused_ordering(95) 00:14:12.148 fused_ordering(96) 00:14:12.148 fused_ordering(97) 00:14:12.148 fused_ordering(98) 00:14:12.148 fused_ordering(99) 00:14:12.148 fused_ordering(100) 00:14:12.148 fused_ordering(101) 00:14:12.148 fused_ordering(102) 00:14:12.148 fused_ordering(103) 00:14:12.148 fused_ordering(104) 00:14:12.148 fused_ordering(105) 00:14:12.148 fused_ordering(106) 00:14:12.148 fused_ordering(107) 00:14:12.148 fused_ordering(108) 00:14:12.148 fused_ordering(109) 00:14:12.148 fused_ordering(110) 00:14:12.148 fused_ordering(111) 00:14:12.148 fused_ordering(112) 00:14:12.148 fused_ordering(113) 00:14:12.148 fused_ordering(114) 00:14:12.148 fused_ordering(115) 00:14:12.148 fused_ordering(116) 00:14:12.148 fused_ordering(117) 00:14:12.148 fused_ordering(118) 00:14:12.148 fused_ordering(119) 00:14:12.148 fused_ordering(120) 00:14:12.148 fused_ordering(121) 00:14:12.148 fused_ordering(122) 00:14:12.148 fused_ordering(123) 00:14:12.148 fused_ordering(124) 00:14:12.148 fused_ordering(125) 00:14:12.148 fused_ordering(126) 00:14:12.148 fused_ordering(127) 00:14:12.148 fused_ordering(128) 00:14:12.148 fused_ordering(129) 00:14:12.148 fused_ordering(130) 00:14:12.148 fused_ordering(131) 00:14:12.148 fused_ordering(132) 00:14:12.148 fused_ordering(133) 00:14:12.148 fused_ordering(134) 00:14:12.148 fused_ordering(135) 00:14:12.148 fused_ordering(136) 00:14:12.148 fused_ordering(137) 00:14:12.148 fused_ordering(138) 00:14:12.148 fused_ordering(139) 00:14:12.148 fused_ordering(140) 00:14:12.148 fused_ordering(141) 00:14:12.148 fused_ordering(142) 00:14:12.148 fused_ordering(143) 00:14:12.148 fused_ordering(144) 00:14:12.148 fused_ordering(145) 00:14:12.148 fused_ordering(146) 00:14:12.148 fused_ordering(147) 00:14:12.148 fused_ordering(148) 00:14:12.148 fused_ordering(149) 00:14:12.148 fused_ordering(150) 00:14:12.148 fused_ordering(151) 00:14:12.148 fused_ordering(152) 00:14:12.148 fused_ordering(153) 00:14:12.148 fused_ordering(154) 
00:14:12.148 fused_ordering(155) 00:14:12.148 fused_ordering(156) 00:14:12.148 fused_ordering(157) 00:14:12.148 fused_ordering(158) 00:14:12.148 fused_ordering(159) 00:14:12.148 fused_ordering(160) 00:14:12.148 fused_ordering(161) 00:14:12.148 fused_ordering(162) 00:14:12.148 fused_ordering(163) 00:14:12.148 fused_ordering(164) 00:14:12.148 fused_ordering(165) 00:14:12.148 fused_ordering(166) 00:14:12.148 fused_ordering(167) 00:14:12.148 fused_ordering(168) 00:14:12.148 fused_ordering(169) 00:14:12.148 fused_ordering(170) 00:14:12.148 fused_ordering(171) 00:14:12.148 fused_ordering(172) 00:14:12.148 fused_ordering(173) 00:14:12.148 fused_ordering(174) 00:14:12.148 fused_ordering(175) 00:14:12.148 fused_ordering(176) 00:14:12.148 fused_ordering(177) 00:14:12.148 fused_ordering(178) 00:14:12.148 fused_ordering(179) 00:14:12.148 fused_ordering(180) 00:14:12.148 fused_ordering(181) 00:14:12.148 fused_ordering(182) 00:14:12.148 fused_ordering(183) 00:14:12.148 fused_ordering(184) 00:14:12.148 fused_ordering(185) 00:14:12.148 fused_ordering(186) 00:14:12.148 fused_ordering(187) 00:14:12.148 fused_ordering(188) 00:14:12.148 fused_ordering(189) 00:14:12.148 fused_ordering(190) 00:14:12.148 fused_ordering(191) 00:14:12.148 fused_ordering(192) 00:14:12.148 fused_ordering(193) 00:14:12.148 fused_ordering(194) 00:14:12.148 fused_ordering(195) 00:14:12.148 fused_ordering(196) 00:14:12.148 fused_ordering(197) 00:14:12.148 fused_ordering(198) 00:14:12.148 fused_ordering(199) 00:14:12.148 fused_ordering(200) 00:14:12.148 fused_ordering(201) 00:14:12.148 fused_ordering(202) 00:14:12.148 fused_ordering(203) 00:14:12.148 fused_ordering(204) 00:14:12.148 fused_ordering(205) 00:14:12.405 fused_ordering(206) 00:14:12.405 fused_ordering(207) 00:14:12.405 fused_ordering(208) 00:14:12.405 fused_ordering(209) 00:14:12.405 fused_ordering(210) 00:14:12.405 fused_ordering(211) 00:14:12.405 fused_ordering(212) 00:14:12.405 fused_ordering(213) 00:14:12.405 fused_ordering(214) 00:14:12.405 fused_ordering(215) 00:14:12.405 fused_ordering(216) 00:14:12.405 fused_ordering(217) 00:14:12.405 fused_ordering(218) 00:14:12.405 fused_ordering(219) 00:14:12.405 fused_ordering(220) 00:14:12.405 fused_ordering(221) 00:14:12.405 fused_ordering(222) 00:14:12.405 fused_ordering(223) 00:14:12.405 fused_ordering(224) 00:14:12.405 fused_ordering(225) 00:14:12.405 fused_ordering(226) 00:14:12.405 fused_ordering(227) 00:14:12.405 fused_ordering(228) 00:14:12.405 fused_ordering(229) 00:14:12.405 fused_ordering(230) 00:14:12.405 fused_ordering(231) 00:14:12.405 fused_ordering(232) 00:14:12.405 fused_ordering(233) 00:14:12.405 fused_ordering(234) 00:14:12.405 fused_ordering(235) 00:14:12.405 fused_ordering(236) 00:14:12.405 fused_ordering(237) 00:14:12.405 fused_ordering(238) 00:14:12.405 fused_ordering(239) 00:14:12.405 fused_ordering(240) 00:14:12.405 fused_ordering(241) 00:14:12.405 fused_ordering(242) 00:14:12.405 fused_ordering(243) 00:14:12.405 fused_ordering(244) 00:14:12.405 fused_ordering(245) 00:14:12.405 fused_ordering(246) 00:14:12.405 fused_ordering(247) 00:14:12.405 fused_ordering(248) 00:14:12.405 fused_ordering(249) 00:14:12.405 fused_ordering(250) 00:14:12.405 fused_ordering(251) 00:14:12.405 fused_ordering(252) 00:14:12.405 fused_ordering(253) 00:14:12.405 fused_ordering(254) 00:14:12.405 fused_ordering(255) 00:14:12.405 fused_ordering(256) 00:14:12.405 fused_ordering(257) 00:14:12.405 fused_ordering(258) 00:14:12.405 fused_ordering(259) 00:14:12.405 fused_ordering(260) 00:14:12.405 fused_ordering(261) 00:14:12.405 
fused_ordering(262) 00:14:12.405 fused_ordering(263) 00:14:12.405 fused_ordering(264) 00:14:12.405 fused_ordering(265) 00:14:12.405 fused_ordering(266) 00:14:12.405 fused_ordering(267) 00:14:12.405 fused_ordering(268) 00:14:12.405 fused_ordering(269) 00:14:12.405 fused_ordering(270) 00:14:12.405 fused_ordering(271) 00:14:12.405 fused_ordering(272) 00:14:12.405 fused_ordering(273) 00:14:12.405 fused_ordering(274) 00:14:12.405 fused_ordering(275) 00:14:12.406 fused_ordering(276) 00:14:12.406 fused_ordering(277) 00:14:12.406 fused_ordering(278) 00:14:12.406 fused_ordering(279) 00:14:12.406 fused_ordering(280) 00:14:12.406 fused_ordering(281) 00:14:12.406 fused_ordering(282) 00:14:12.406 fused_ordering(283) 00:14:12.406 fused_ordering(284) 00:14:12.406 fused_ordering(285) 00:14:12.406 fused_ordering(286) 00:14:12.406 fused_ordering(287) 00:14:12.406 fused_ordering(288) 00:14:12.406 fused_ordering(289) 00:14:12.406 fused_ordering(290) 00:14:12.406 fused_ordering(291) 00:14:12.406 fused_ordering(292) 00:14:12.406 fused_ordering(293) 00:14:12.406 fused_ordering(294) 00:14:12.406 fused_ordering(295) 00:14:12.406 fused_ordering(296) 00:14:12.406 fused_ordering(297) 00:14:12.406 fused_ordering(298) 00:14:12.406 fused_ordering(299) 00:14:12.406 fused_ordering(300) 00:14:12.406 fused_ordering(301) 00:14:12.406 fused_ordering(302) 00:14:12.406 fused_ordering(303) 00:14:12.406 fused_ordering(304) 00:14:12.406 fused_ordering(305) 00:14:12.406 fused_ordering(306) 00:14:12.406 fused_ordering(307) 00:14:12.406 fused_ordering(308) 00:14:12.406 fused_ordering(309) 00:14:12.406 fused_ordering(310) 00:14:12.406 fused_ordering(311) 00:14:12.406 fused_ordering(312) 00:14:12.406 fused_ordering(313) 00:14:12.406 fused_ordering(314) 00:14:12.406 fused_ordering(315) 00:14:12.406 fused_ordering(316) 00:14:12.406 fused_ordering(317) 00:14:12.406 fused_ordering(318) 00:14:12.406 fused_ordering(319) 00:14:12.406 fused_ordering(320) 00:14:12.406 fused_ordering(321) 00:14:12.406 fused_ordering(322) 00:14:12.406 fused_ordering(323) 00:14:12.406 fused_ordering(324) 00:14:12.406 fused_ordering(325) 00:14:12.406 fused_ordering(326) 00:14:12.406 fused_ordering(327) 00:14:12.406 fused_ordering(328) 00:14:12.406 fused_ordering(329) 00:14:12.406 fused_ordering(330) 00:14:12.406 fused_ordering(331) 00:14:12.406 fused_ordering(332) 00:14:12.406 fused_ordering(333) 00:14:12.406 fused_ordering(334) 00:14:12.406 fused_ordering(335) 00:14:12.406 fused_ordering(336) 00:14:12.406 fused_ordering(337) 00:14:12.406 fused_ordering(338) 00:14:12.406 fused_ordering(339) 00:14:12.406 fused_ordering(340) 00:14:12.406 fused_ordering(341) 00:14:12.406 fused_ordering(342) 00:14:12.406 fused_ordering(343) 00:14:12.406 fused_ordering(344) 00:14:12.406 fused_ordering(345) 00:14:12.406 fused_ordering(346) 00:14:12.406 fused_ordering(347) 00:14:12.406 fused_ordering(348) 00:14:12.406 fused_ordering(349) 00:14:12.406 fused_ordering(350) 00:14:12.406 fused_ordering(351) 00:14:12.406 fused_ordering(352) 00:14:12.406 fused_ordering(353) 00:14:12.406 fused_ordering(354) 00:14:12.406 fused_ordering(355) 00:14:12.406 fused_ordering(356) 00:14:12.406 fused_ordering(357) 00:14:12.406 fused_ordering(358) 00:14:12.406 fused_ordering(359) 00:14:12.406 fused_ordering(360) 00:14:12.406 fused_ordering(361) 00:14:12.406 fused_ordering(362) 00:14:12.406 fused_ordering(363) 00:14:12.406 fused_ordering(364) 00:14:12.406 fused_ordering(365) 00:14:12.406 fused_ordering(366) 00:14:12.406 fused_ordering(367) 00:14:12.406 fused_ordering(368) 00:14:12.406 fused_ordering(369) 
00:14:12.406 fused_ordering(370) 00:14:12.406 fused_ordering(371) 00:14:12.406 fused_ordering(372) 00:14:12.406 fused_ordering(373) 00:14:12.406 fused_ordering(374) 00:14:12.406 fused_ordering(375) 00:14:12.406 fused_ordering(376) 00:14:12.406 fused_ordering(377) 00:14:12.406 fused_ordering(378) 00:14:12.406 fused_ordering(379) 00:14:12.406 fused_ordering(380) 00:14:12.406 fused_ordering(381) 00:14:12.406 fused_ordering(382) 00:14:12.406 fused_ordering(383) 00:14:12.406 fused_ordering(384) 00:14:12.406 fused_ordering(385) 00:14:12.406 fused_ordering(386) 00:14:12.406 fused_ordering(387) 00:14:12.406 fused_ordering(388) 00:14:12.406 fused_ordering(389) 00:14:12.406 fused_ordering(390) 00:14:12.406 fused_ordering(391) 00:14:12.406 fused_ordering(392) 00:14:12.406 fused_ordering(393) 00:14:12.406 fused_ordering(394) 00:14:12.406 fused_ordering(395) 00:14:12.406 fused_ordering(396) 00:14:12.406 fused_ordering(397) 00:14:12.406 fused_ordering(398) 00:14:12.406 fused_ordering(399) 00:14:12.406 fused_ordering(400) 00:14:12.406 fused_ordering(401) 00:14:12.406 fused_ordering(402) 00:14:12.406 fused_ordering(403) 00:14:12.406 fused_ordering(404) 00:14:12.406 fused_ordering(405) 00:14:12.406 fused_ordering(406) 00:14:12.406 fused_ordering(407) 00:14:12.406 fused_ordering(408) 00:14:12.406 fused_ordering(409) 00:14:12.406 fused_ordering(410) 00:14:12.970 fused_ordering(411) 00:14:12.970 fused_ordering(412) 00:14:12.970 fused_ordering(413) 00:14:12.970 fused_ordering(414) 00:14:12.970 fused_ordering(415) 00:14:12.970 fused_ordering(416) 00:14:12.970 fused_ordering(417) 00:14:12.970 fused_ordering(418) 00:14:12.970 fused_ordering(419) 00:14:12.970 fused_ordering(420) 00:14:12.970 fused_ordering(421) 00:14:12.970 fused_ordering(422) 00:14:12.970 fused_ordering(423) 00:14:12.970 fused_ordering(424) 00:14:12.970 fused_ordering(425) 00:14:12.970 fused_ordering(426) 00:14:12.970 fused_ordering(427) 00:14:12.970 fused_ordering(428) 00:14:12.970 fused_ordering(429) 00:14:12.970 fused_ordering(430) 00:14:12.970 fused_ordering(431) 00:14:12.970 fused_ordering(432) 00:14:12.970 fused_ordering(433) 00:14:12.970 fused_ordering(434) 00:14:12.970 fused_ordering(435) 00:14:12.970 fused_ordering(436) 00:14:12.970 fused_ordering(437) 00:14:12.970 fused_ordering(438) 00:14:12.970 fused_ordering(439) 00:14:12.970 fused_ordering(440) 00:14:12.970 fused_ordering(441) 00:14:12.970 fused_ordering(442) 00:14:12.970 fused_ordering(443) 00:14:12.970 fused_ordering(444) 00:14:12.970 fused_ordering(445) 00:14:12.970 fused_ordering(446) 00:14:12.970 fused_ordering(447) 00:14:12.970 fused_ordering(448) 00:14:12.970 fused_ordering(449) 00:14:12.970 fused_ordering(450) 00:14:12.970 fused_ordering(451) 00:14:12.970 fused_ordering(452) 00:14:12.970 fused_ordering(453) 00:14:12.970 fused_ordering(454) 00:14:12.970 fused_ordering(455) 00:14:12.970 fused_ordering(456) 00:14:12.970 fused_ordering(457) 00:14:12.970 fused_ordering(458) 00:14:12.970 fused_ordering(459) 00:14:12.970 fused_ordering(460) 00:14:12.970 fused_ordering(461) 00:14:12.970 fused_ordering(462) 00:14:12.970 fused_ordering(463) 00:14:12.970 fused_ordering(464) 00:14:12.970 fused_ordering(465) 00:14:12.970 fused_ordering(466) 00:14:12.970 fused_ordering(467) 00:14:12.970 fused_ordering(468) 00:14:12.970 fused_ordering(469) 00:14:12.970 fused_ordering(470) 00:14:12.970 fused_ordering(471) 00:14:12.970 fused_ordering(472) 00:14:12.970 fused_ordering(473) 00:14:12.970 fused_ordering(474) 00:14:12.970 fused_ordering(475) 00:14:12.970 fused_ordering(476) 00:14:12.970 
fused_ordering(477) 00:14:12.970 fused_ordering(478) 00:14:12.970 fused_ordering(479) 00:14:12.970 fused_ordering(480) 00:14:12.970 fused_ordering(481) 00:14:12.970 fused_ordering(482) 00:14:12.970 fused_ordering(483) 00:14:12.970 fused_ordering(484) 00:14:12.970 fused_ordering(485) 00:14:12.970 fused_ordering(486) 00:14:12.970 fused_ordering(487) 00:14:12.970 fused_ordering(488) 00:14:12.970 fused_ordering(489) 00:14:12.970 fused_ordering(490) 00:14:12.970 fused_ordering(491) 00:14:12.970 fused_ordering(492) 00:14:12.970 fused_ordering(493) 00:14:12.970 fused_ordering(494) 00:14:12.970 fused_ordering(495) 00:14:12.970 fused_ordering(496) 00:14:12.970 fused_ordering(497) 00:14:12.970 fused_ordering(498) 00:14:12.970 fused_ordering(499) 00:14:12.970 fused_ordering(500) 00:14:12.970 fused_ordering(501) 00:14:12.970 fused_ordering(502) 00:14:12.970 fused_ordering(503) 00:14:12.970 fused_ordering(504) 00:14:12.970 fused_ordering(505) 00:14:12.970 fused_ordering(506) 00:14:12.970 fused_ordering(507) 00:14:12.970 fused_ordering(508) 00:14:12.970 fused_ordering(509) 00:14:12.970 fused_ordering(510) 00:14:12.970 fused_ordering(511) 00:14:12.970 fused_ordering(512) 00:14:12.970 fused_ordering(513) 00:14:12.970 fused_ordering(514) 00:14:12.970 fused_ordering(515) 00:14:12.970 fused_ordering(516) 00:14:12.970 fused_ordering(517) 00:14:12.970 fused_ordering(518) 00:14:12.970 fused_ordering(519) 00:14:12.970 fused_ordering(520) 00:14:12.971 fused_ordering(521) 00:14:12.971 fused_ordering(522) 00:14:12.971 fused_ordering(523) 00:14:12.971 fused_ordering(524) 00:14:12.971 fused_ordering(525) 00:14:12.971 fused_ordering(526) 00:14:12.971 fused_ordering(527) 00:14:12.971 fused_ordering(528) 00:14:12.971 fused_ordering(529) 00:14:12.971 fused_ordering(530) 00:14:12.971 fused_ordering(531) 00:14:12.971 fused_ordering(532) 00:14:12.971 fused_ordering(533) 00:14:12.971 fused_ordering(534) 00:14:12.971 fused_ordering(535) 00:14:12.971 fused_ordering(536) 00:14:12.971 fused_ordering(537) 00:14:12.971 fused_ordering(538) 00:14:12.971 fused_ordering(539) 00:14:12.971 fused_ordering(540) 00:14:12.971 fused_ordering(541) 00:14:12.971 fused_ordering(542) 00:14:12.971 fused_ordering(543) 00:14:12.971 fused_ordering(544) 00:14:12.971 fused_ordering(545) 00:14:12.971 fused_ordering(546) 00:14:12.971 fused_ordering(547) 00:14:12.971 fused_ordering(548) 00:14:12.971 fused_ordering(549) 00:14:12.971 fused_ordering(550) 00:14:12.971 fused_ordering(551) 00:14:12.971 fused_ordering(552) 00:14:12.971 fused_ordering(553) 00:14:12.971 fused_ordering(554) 00:14:12.971 fused_ordering(555) 00:14:12.971 fused_ordering(556) 00:14:12.971 fused_ordering(557) 00:14:12.971 fused_ordering(558) 00:14:12.971 fused_ordering(559) 00:14:12.971 fused_ordering(560) 00:14:12.971 fused_ordering(561) 00:14:12.971 fused_ordering(562) 00:14:12.971 fused_ordering(563) 00:14:12.971 fused_ordering(564) 00:14:12.971 fused_ordering(565) 00:14:12.971 fused_ordering(566) 00:14:12.971 fused_ordering(567) 00:14:12.971 fused_ordering(568) 00:14:12.971 fused_ordering(569) 00:14:12.971 fused_ordering(570) 00:14:12.971 fused_ordering(571) 00:14:12.971 fused_ordering(572) 00:14:12.971 fused_ordering(573) 00:14:12.971 fused_ordering(574) 00:14:12.971 fused_ordering(575) 00:14:12.971 fused_ordering(576) 00:14:12.971 fused_ordering(577) 00:14:12.971 fused_ordering(578) 00:14:12.971 fused_ordering(579) 00:14:12.971 fused_ordering(580) 00:14:12.971 fused_ordering(581) 00:14:12.971 fused_ordering(582) 00:14:12.971 fused_ordering(583) 00:14:12.971 fused_ordering(584) 
00:14:12.971 fused_ordering(585) 00:14:12.971 fused_ordering(586) 00:14:12.971 fused_ordering(587) 00:14:12.971 fused_ordering(588) 00:14:12.971 fused_ordering(589) 00:14:12.971 fused_ordering(590) 00:14:12.971 fused_ordering(591) 00:14:12.971 fused_ordering(592) 00:14:12.971 fused_ordering(593) 00:14:12.971 fused_ordering(594) 00:14:12.971 fused_ordering(595) 00:14:12.971 fused_ordering(596) 00:14:12.971 fused_ordering(597) 00:14:12.971 fused_ordering(598) 00:14:12.971 fused_ordering(599) 00:14:12.971 fused_ordering(600) 00:14:12.971 fused_ordering(601) 00:14:12.971 fused_ordering(602) 00:14:12.971 fused_ordering(603) 00:14:12.971 fused_ordering(604) 00:14:12.971 fused_ordering(605) 00:14:12.971 fused_ordering(606) 00:14:12.971 fused_ordering(607) 00:14:12.971 fused_ordering(608) 00:14:12.971 fused_ordering(609) 00:14:12.971 fused_ordering(610) 00:14:12.971 fused_ordering(611) 00:14:12.971 fused_ordering(612) 00:14:12.971 fused_ordering(613) 00:14:12.971 fused_ordering(614) 00:14:12.971 fused_ordering(615) 00:14:13.534 fused_ordering(616) 00:14:13.534 fused_ordering(617) 00:14:13.534 fused_ordering(618) 00:14:13.534 fused_ordering(619) 00:14:13.534 fused_ordering(620) 00:14:13.534 fused_ordering(621) 00:14:13.534 fused_ordering(622) 00:14:13.534 fused_ordering(623) 00:14:13.534 fused_ordering(624) 00:14:13.534 fused_ordering(625) 00:14:13.534 fused_ordering(626) 00:14:13.534 fused_ordering(627) 00:14:13.534 fused_ordering(628) 00:14:13.534 fused_ordering(629) 00:14:13.534 fused_ordering(630) 00:14:13.534 fused_ordering(631) 00:14:13.534 fused_ordering(632) 00:14:13.534 fused_ordering(633) 00:14:13.534 fused_ordering(634) 00:14:13.534 fused_ordering(635) 00:14:13.534 fused_ordering(636) 00:14:13.534 fused_ordering(637) 00:14:13.534 fused_ordering(638) 00:14:13.534 fused_ordering(639) 00:14:13.534 fused_ordering(640) 00:14:13.534 fused_ordering(641) 00:14:13.534 fused_ordering(642) 00:14:13.534 fused_ordering(643) 00:14:13.534 fused_ordering(644) 00:14:13.534 fused_ordering(645) 00:14:13.534 fused_ordering(646) 00:14:13.534 fused_ordering(647) 00:14:13.534 fused_ordering(648) 00:14:13.534 fused_ordering(649) 00:14:13.534 fused_ordering(650) 00:14:13.534 fused_ordering(651) 00:14:13.534 fused_ordering(652) 00:14:13.534 fused_ordering(653) 00:14:13.534 fused_ordering(654) 00:14:13.534 fused_ordering(655) 00:14:13.534 fused_ordering(656) 00:14:13.534 fused_ordering(657) 00:14:13.534 fused_ordering(658) 00:14:13.534 fused_ordering(659) 00:14:13.534 fused_ordering(660) 00:14:13.534 fused_ordering(661) 00:14:13.534 fused_ordering(662) 00:14:13.534 fused_ordering(663) 00:14:13.534 fused_ordering(664) 00:14:13.534 fused_ordering(665) 00:14:13.534 fused_ordering(666) 00:14:13.534 fused_ordering(667) 00:14:13.534 fused_ordering(668) 00:14:13.534 fused_ordering(669) 00:14:13.534 fused_ordering(670) 00:14:13.534 fused_ordering(671) 00:14:13.534 fused_ordering(672) 00:14:13.534 fused_ordering(673) 00:14:13.534 fused_ordering(674) 00:14:13.534 fused_ordering(675) 00:14:13.534 fused_ordering(676) 00:14:13.534 fused_ordering(677) 00:14:13.534 fused_ordering(678) 00:14:13.534 fused_ordering(679) 00:14:13.534 fused_ordering(680) 00:14:13.534 fused_ordering(681) 00:14:13.534 fused_ordering(682) 00:14:13.534 fused_ordering(683) 00:14:13.534 fused_ordering(684) 00:14:13.534 fused_ordering(685) 00:14:13.534 fused_ordering(686) 00:14:13.534 fused_ordering(687) 00:14:13.534 fused_ordering(688) 00:14:13.534 fused_ordering(689) 00:14:13.534 fused_ordering(690) 00:14:13.535 fused_ordering(691) 00:14:13.535 
fused_ordering(692) 00:14:13.535 fused_ordering(693) 00:14:13.535 fused_ordering(694) 00:14:13.535 fused_ordering(695) 00:14:13.535 fused_ordering(696) 00:14:13.535 fused_ordering(697) 00:14:13.535 fused_ordering(698) 00:14:13.535 fused_ordering(699) 00:14:13.535 fused_ordering(700) 00:14:13.535 fused_ordering(701) 00:14:13.535 fused_ordering(702) 00:14:13.535 fused_ordering(703) 00:14:13.535 fused_ordering(704) 00:14:13.535 fused_ordering(705) 00:14:13.535 fused_ordering(706) 00:14:13.535 fused_ordering(707) 00:14:13.535 fused_ordering(708) 00:14:13.535 fused_ordering(709) 00:14:13.535 fused_ordering(710) 00:14:13.535 fused_ordering(711) 00:14:13.535 fused_ordering(712) 00:14:13.535 fused_ordering(713) 00:14:13.535 fused_ordering(714) 00:14:13.535 fused_ordering(715) 00:14:13.535 fused_ordering(716) 00:14:13.535 fused_ordering(717) 00:14:13.535 fused_ordering(718) 00:14:13.535 fused_ordering(719) 00:14:13.535 fused_ordering(720) 00:14:13.535 fused_ordering(721) 00:14:13.535 fused_ordering(722) 00:14:13.535 fused_ordering(723) 00:14:13.535 fused_ordering(724) 00:14:13.535 fused_ordering(725) 00:14:13.535 fused_ordering(726) 00:14:13.535 fused_ordering(727) 00:14:13.535 fused_ordering(728) 00:14:13.535 fused_ordering(729) 00:14:13.535 fused_ordering(730) 00:14:13.535 fused_ordering(731) 00:14:13.535 fused_ordering(732) 00:14:13.535 fused_ordering(733) 00:14:13.535 fused_ordering(734) 00:14:13.535 fused_ordering(735) 00:14:13.535 fused_ordering(736) 00:14:13.535 fused_ordering(737) 00:14:13.535 fused_ordering(738) 00:14:13.535 fused_ordering(739) 00:14:13.535 fused_ordering(740) 00:14:13.535 fused_ordering(741) 00:14:13.535 fused_ordering(742) 00:14:13.535 fused_ordering(743) 00:14:13.535 fused_ordering(744) 00:14:13.535 fused_ordering(745) 00:14:13.535 fused_ordering(746) 00:14:13.535 fused_ordering(747) 00:14:13.535 fused_ordering(748) 00:14:13.535 fused_ordering(749) 00:14:13.535 fused_ordering(750) 00:14:13.535 fused_ordering(751) 00:14:13.535 fused_ordering(752) 00:14:13.535 fused_ordering(753) 00:14:13.535 fused_ordering(754) 00:14:13.535 fused_ordering(755) 00:14:13.535 fused_ordering(756) 00:14:13.535 fused_ordering(757) 00:14:13.535 fused_ordering(758) 00:14:13.535 fused_ordering(759) 00:14:13.535 fused_ordering(760) 00:14:13.535 fused_ordering(761) 00:14:13.535 fused_ordering(762) 00:14:13.535 fused_ordering(763) 00:14:13.535 fused_ordering(764) 00:14:13.535 fused_ordering(765) 00:14:13.535 fused_ordering(766) 00:14:13.535 fused_ordering(767) 00:14:13.535 fused_ordering(768) 00:14:13.535 fused_ordering(769) 00:14:13.535 fused_ordering(770) 00:14:13.535 fused_ordering(771) 00:14:13.535 fused_ordering(772) 00:14:13.535 fused_ordering(773) 00:14:13.535 fused_ordering(774) 00:14:13.535 fused_ordering(775) 00:14:13.535 fused_ordering(776) 00:14:13.535 fused_ordering(777) 00:14:13.535 fused_ordering(778) 00:14:13.535 fused_ordering(779) 00:14:13.535 fused_ordering(780) 00:14:13.535 fused_ordering(781) 00:14:13.535 fused_ordering(782) 00:14:13.535 fused_ordering(783) 00:14:13.535 fused_ordering(784) 00:14:13.535 fused_ordering(785) 00:14:13.535 fused_ordering(786) 00:14:13.535 fused_ordering(787) 00:14:13.535 fused_ordering(788) 00:14:13.535 fused_ordering(789) 00:14:13.535 fused_ordering(790) 00:14:13.535 fused_ordering(791) 00:14:13.535 fused_ordering(792) 00:14:13.535 fused_ordering(793) 00:14:13.535 fused_ordering(794) 00:14:13.535 fused_ordering(795) 00:14:13.535 fused_ordering(796) 00:14:13.535 fused_ordering(797) 00:14:13.535 fused_ordering(798) 00:14:13.535 fused_ordering(799) 
00:14:13.535 fused_ordering(800) 00:14:13.535 fused_ordering(801) 00:14:13.535 fused_ordering(802) 00:14:13.535 fused_ordering(803) 00:14:13.535 fused_ordering(804) 00:14:13.535 fused_ordering(805) 00:14:13.535 fused_ordering(806) 00:14:13.535 fused_ordering(807) 00:14:13.535 fused_ordering(808) 00:14:13.535 fused_ordering(809) 00:14:13.535 fused_ordering(810) 00:14:13.535 fused_ordering(811) 00:14:13.535 fused_ordering(812) 00:14:13.535 fused_ordering(813) 00:14:13.535 fused_ordering(814) 00:14:13.535 fused_ordering(815) 00:14:13.535 fused_ordering(816) 00:14:13.535 fused_ordering(817) 00:14:13.535 fused_ordering(818) 00:14:13.535 fused_ordering(819) 00:14:13.535 fused_ordering(820) 00:14:14.100 fused_ordering(821) 00:14:14.100 fused_ordering(822) 00:14:14.100 fused_ordering(823) 00:14:14.100 fused_ordering(824) 00:14:14.100 fused_ordering(825) 00:14:14.100 fused_ordering(826) 00:14:14.100 fused_ordering(827) 00:14:14.100 fused_ordering(828) 00:14:14.100 fused_ordering(829) 00:14:14.100 fused_ordering(830) 00:14:14.100 fused_ordering(831) 00:14:14.100 fused_ordering(832) 00:14:14.100 fused_ordering(833) 00:14:14.100 fused_ordering(834) 00:14:14.100 fused_ordering(835) 00:14:14.100 fused_ordering(836) 00:14:14.100 fused_ordering(837) 00:14:14.100 fused_ordering(838) 00:14:14.100 fused_ordering(839) 00:14:14.100 fused_ordering(840) 00:14:14.100 fused_ordering(841) 00:14:14.100 fused_ordering(842) 00:14:14.100 fused_ordering(843) 00:14:14.100 fused_ordering(844) 00:14:14.100 fused_ordering(845) 00:14:14.100 fused_ordering(846) 00:14:14.100 fused_ordering(847) 00:14:14.100 fused_ordering(848) 00:14:14.100 fused_ordering(849) 00:14:14.100 fused_ordering(850) 00:14:14.100 fused_ordering(851) 00:14:14.100 fused_ordering(852) 00:14:14.100 fused_ordering(853) 00:14:14.100 fused_ordering(854) 00:14:14.100 fused_ordering(855) 00:14:14.100 fused_ordering(856) 00:14:14.100 fused_ordering(857) 00:14:14.100 fused_ordering(858) 00:14:14.100 fused_ordering(859) 00:14:14.100 fused_ordering(860) 00:14:14.100 fused_ordering(861) 00:14:14.100 fused_ordering(862) 00:14:14.100 fused_ordering(863) 00:14:14.100 fused_ordering(864) 00:14:14.100 fused_ordering(865) 00:14:14.100 fused_ordering(866) 00:14:14.100 fused_ordering(867) 00:14:14.100 fused_ordering(868) 00:14:14.100 fused_ordering(869) 00:14:14.100 fused_ordering(870) 00:14:14.100 fused_ordering(871) 00:14:14.100 fused_ordering(872) 00:14:14.100 fused_ordering(873) 00:14:14.100 fused_ordering(874) 00:14:14.100 fused_ordering(875) 00:14:14.100 fused_ordering(876) 00:14:14.100 fused_ordering(877) 00:14:14.100 fused_ordering(878) 00:14:14.100 fused_ordering(879) 00:14:14.100 fused_ordering(880) 00:14:14.100 fused_ordering(881) 00:14:14.100 fused_ordering(882) 00:14:14.100 fused_ordering(883) 00:14:14.100 fused_ordering(884) 00:14:14.100 fused_ordering(885) 00:14:14.100 fused_ordering(886) 00:14:14.100 fused_ordering(887) 00:14:14.100 fused_ordering(888) 00:14:14.100 fused_ordering(889) 00:14:14.100 fused_ordering(890) 00:14:14.100 fused_ordering(891) 00:14:14.100 fused_ordering(892) 00:14:14.100 fused_ordering(893) 00:14:14.100 fused_ordering(894) 00:14:14.100 fused_ordering(895) 00:14:14.100 fused_ordering(896) 00:14:14.100 fused_ordering(897) 00:14:14.100 fused_ordering(898) 00:14:14.100 fused_ordering(899) 00:14:14.100 fused_ordering(900) 00:14:14.100 fused_ordering(901) 00:14:14.100 fused_ordering(902) 00:14:14.100 fused_ordering(903) 00:14:14.100 fused_ordering(904) 00:14:14.100 fused_ordering(905) 00:14:14.100 fused_ordering(906) 00:14:14.100 
fused_ordering(907) 00:14:14.100 fused_ordering(908) 00:14:14.100 fused_ordering(909) 00:14:14.100 fused_ordering(910) 00:14:14.100 fused_ordering(911) 00:14:14.100 fused_ordering(912) 00:14:14.100 fused_ordering(913) 00:14:14.100 fused_ordering(914) 00:14:14.100 fused_ordering(915) 00:14:14.100 fused_ordering(916) 00:14:14.100 fused_ordering(917) 00:14:14.100 fused_ordering(918) 00:14:14.100 fused_ordering(919) 00:14:14.100 fused_ordering(920) 00:14:14.100 fused_ordering(921) 00:14:14.100 fused_ordering(922) 00:14:14.100 fused_ordering(923) 00:14:14.100 fused_ordering(924) 00:14:14.100 fused_ordering(925) 00:14:14.100 fused_ordering(926) 00:14:14.100 fused_ordering(927) 00:14:14.100 fused_ordering(928) 00:14:14.100 fused_ordering(929) 00:14:14.100 fused_ordering(930) 00:14:14.100 fused_ordering(931) 00:14:14.100 fused_ordering(932) 00:14:14.100 fused_ordering(933) 00:14:14.100 fused_ordering(934) 00:14:14.100 fused_ordering(935) 00:14:14.100 fused_ordering(936) 00:14:14.100 fused_ordering(937) 00:14:14.100 fused_ordering(938) 00:14:14.100 fused_ordering(939) 00:14:14.100 fused_ordering(940) 00:14:14.100 fused_ordering(941) 00:14:14.100 fused_ordering(942) 00:14:14.100 fused_ordering(943) 00:14:14.100 fused_ordering(944) 00:14:14.100 fused_ordering(945) 00:14:14.100 fused_ordering(946) 00:14:14.100 fused_ordering(947) 00:14:14.100 fused_ordering(948) 00:14:14.100 fused_ordering(949) 00:14:14.100 fused_ordering(950) 00:14:14.100 fused_ordering(951) 00:14:14.100 fused_ordering(952) 00:14:14.100 fused_ordering(953) 00:14:14.100 fused_ordering(954) 00:14:14.100 fused_ordering(955) 00:14:14.100 fused_ordering(956) 00:14:14.100 fused_ordering(957) 00:14:14.100 fused_ordering(958) 00:14:14.100 fused_ordering(959) 00:14:14.100 fused_ordering(960) 00:14:14.100 fused_ordering(961) 00:14:14.100 fused_ordering(962) 00:14:14.100 fused_ordering(963) 00:14:14.100 fused_ordering(964) 00:14:14.100 fused_ordering(965) 00:14:14.100 fused_ordering(966) 00:14:14.100 fused_ordering(967) 00:14:14.100 fused_ordering(968) 00:14:14.100 fused_ordering(969) 00:14:14.100 fused_ordering(970) 00:14:14.100 fused_ordering(971) 00:14:14.100 fused_ordering(972) 00:14:14.100 fused_ordering(973) 00:14:14.100 fused_ordering(974) 00:14:14.100 fused_ordering(975) 00:14:14.100 fused_ordering(976) 00:14:14.100 fused_ordering(977) 00:14:14.100 fused_ordering(978) 00:14:14.100 fused_ordering(979) 00:14:14.100 fused_ordering(980) 00:14:14.100 fused_ordering(981) 00:14:14.100 fused_ordering(982) 00:14:14.100 fused_ordering(983) 00:14:14.100 fused_ordering(984) 00:14:14.100 fused_ordering(985) 00:14:14.100 fused_ordering(986) 00:14:14.100 fused_ordering(987) 00:14:14.100 fused_ordering(988) 00:14:14.100 fused_ordering(989) 00:14:14.100 fused_ordering(990) 00:14:14.100 fused_ordering(991) 00:14:14.100 fused_ordering(992) 00:14:14.100 fused_ordering(993) 00:14:14.100 fused_ordering(994) 00:14:14.100 fused_ordering(995) 00:14:14.100 fused_ordering(996) 00:14:14.100 fused_ordering(997) 00:14:14.100 fused_ordering(998) 00:14:14.100 fused_ordering(999) 00:14:14.100 fused_ordering(1000) 00:14:14.100 fused_ordering(1001) 00:14:14.100 fused_ordering(1002) 00:14:14.100 fused_ordering(1003) 00:14:14.100 fused_ordering(1004) 00:14:14.100 fused_ordering(1005) 00:14:14.100 fused_ordering(1006) 00:14:14.100 fused_ordering(1007) 00:14:14.100 fused_ordering(1008) 00:14:14.100 fused_ordering(1009) 00:14:14.101 fused_ordering(1010) 00:14:14.101 fused_ordering(1011) 00:14:14.101 fused_ordering(1012) 00:14:14.101 fused_ordering(1013) 00:14:14.101 
fused_ordering(1014) 00:14:14.101 fused_ordering(1015) 00:14:14.101 fused_ordering(1016) 00:14:14.101 fused_ordering(1017) 00:14:14.101 fused_ordering(1018) 00:14:14.101 fused_ordering(1019) 00:14:14.101 fused_ordering(1020) 00:14:14.101 fused_ordering(1021) 00:14:14.101 fused_ordering(1022) 00:14:14.101 fused_ordering(1023) 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.101 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.101 rmmod nvme_tcp 00:14:14.359 rmmod nvme_fabrics 00:14:14.359 rmmod nvme_keyring 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3927508 ']' 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3927508 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3927508 ']' 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3927508 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3927508 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3927508' 00:14:14.359 killing process with pid 3927508 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3927508 00:14:14.359 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3927508 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
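Once the 1024 fused commands above have completed, the teardown traced here is nvmftestfini: drop the exit trap, unload the host-side NVMe/TCP modules (the rmmod lines in the log are modprobe's verbose output), kill the target, and dismantle the namespace topology. A rough standalone equivalent, reusing the names from the sketch above; the body of _remove_spdk_ns is not shown in this trace, so the explicit namespace deletion at the end is an assumption about what that helper does:

#!/usr/bin/env bash
# Hedged teardown sketch mirroring the nvmftestfini steps traced here.
NS=cvl_0_0_ns_spdk
NVMF_PID=3927508   # pid reported by nvmfappstart earlier in this run

# Unload the host-side NVMe/TCP modules, as in the trace.
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# Stop the target that was started inside the namespace.
kill "$NVMF_PID" 2>/dev/null || true

# Tear down the test topology: the trace flushes the initiator-side address;
# deleting the namespace (which also releases cvl_0_0) is an assumption,
# since _remove_spdk_ns is not expanded in this log.
ip -4 addr flush cvl_0_1 || true
ip netns del "$NS" 2>/dev/null || true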
00:14:14.617 19:44:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.515 19:44:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:16.515 00:14:16.515 real 0m7.479s 00:14:16.515 user 0m5.076s 00:14:16.515 sys 0m3.227s 00:14:16.515 19:44:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:16.515 19:44:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.515 ************************************ 00:14:16.515 END TEST nvmf_fused_ordering 00:14:16.515 ************************************ 00:14:16.515 19:44:25 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:16.515 19:44:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:16.515 19:44:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.515 19:44:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:16.515 ************************************ 00:14:16.515 START TEST nvmf_delete_subsystem 00:14:16.515 ************************************ 00:14:16.515 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:16.515 * Looking for test storage... 00:14:16.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.515 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.515 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.773 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.774 19:44:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:18.672 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:18.672 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.672 19:44:27 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.672 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:18.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:18.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.673 19:44:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:14:18.673 00:14:18.673 --- 10.0.0.2 ping statistics --- 00:14:18.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.673 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:14:18.673 00:14:18.673 --- 10.0.0.1 ping statistics --- 00:14:18.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.673 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3929782 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3929782 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3929782 ']' 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:18.673 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.931 [2024-07-25 19:44:28.123650] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:18.931 [2024-07-25 19:44:28.123738] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.931 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.931 [2024-07-25 19:44:28.197491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:18.931 [2024-07-25 19:44:28.285124] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:18.931 [2024-07-25 19:44:28.285190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.931 [2024-07-25 19:44:28.285204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.931 [2024-07-25 19:44:28.285215] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.931 [2024-07-25 19:44:28.285225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.931 [2024-07-25 19:44:28.285286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.931 [2024-07-25 19:44:28.285290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 [2024-07-25 19:44:28.436255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 [2024-07-25 19:44:28.452525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 NULL1 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 Delay0 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3929867 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:19.189 19:44:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:19.189 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.189 [2024-07-25 19:44:28.527216] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
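The xtrace above boils down to the following target-side sequence: nvmf_tgt runs on two cores inside the cvl_0_0_ns_spdk namespace, a TCP transport and the subsystem nqn.2016-06.io.spdk:cnode1 are created over the RPC socket, a null bdev (NULL1) is wrapped in a delay bdev (Delay0) so that submitted I/O stays outstanding, and spdk_nvme_perf is pointed at 10.0.0.2:4420 from the root namespace before the subsystem is deleted out from under it. A condensed sketch of that sequence follows; every command and argument is copied from the trace, while the backgrounding, the $rpc and $perf_pid shorthands, and the comments are editorial assumptions rather than part of the original script:

  # Launch the target inside the test namespace (cvl_0_0 / 10.0.0.2 live there).
  # The harness waits for the RPC socket before issuing the commands below.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x3 &

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Transport, subsystem, listener and the delayed namespace, as traced above.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512
  # Wrap NULL1 in a delay bdev so queued I/O is still outstanding at deletion time.
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Drive I/O from the initiator port (cvl_0_1, 10.0.0.1) in the root namespace,
  # then delete the subsystem while that I/O is still queued behind Delay0.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The "Read/Write completed with error (sct=0, sc=8)" storm that follows is the intended outcome of this phase: the outstanding commands are aborted when the subsystem disappears, and the script later checks (via the NOT wait seen further down) that perf exited with an error instead of hanging.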
00:14:21.084 19:44:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.084 19:44:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.084 19:44:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 [2024-07-25 19:44:30.697470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4538000c00 is same with the state(5) to be set 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write 
completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 starting I/O failed: -6 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Read completed with error (sct=0, sc=8) 00:14:21.341 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error 
(sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 starting I/O failed: -6 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 [2024-07-25 19:44:30.698331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494d40 is same with the state(5) to be set 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, 
sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Write completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:21.342 Read completed with error (sct=0, sc=8) 00:14:22.274 [2024-07-25 19:44:31.665605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ac620 is same with the state(5) to be set 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 [2024-07-25 19:44:31.699458] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f453800bfe0 is same with the state(5) to be set 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 [2024-07-25 19:44:31.699636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f453800c600 is same with the state(5) to be set 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 [2024-07-25 19:44:31.700133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fb00 is same with the state(5) to be set 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, 
sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Read completed with error (sct=0, sc=8) 00:14:22.274 Write completed with error (sct=0, sc=8) 00:14:22.274 [2024-07-25 19:44:31.700331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fec0 is same with the state(5) to be set 00:14:22.274 Initializing NVMe Controllers 00:14:22.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:22.274 Controller IO queue size 128, less than required. 00:14:22.274 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:22.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:22.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:22.274 Initialization complete. Launching workers. 
00:14:22.274 ======================================================== 00:14:22.274 Latency(us) 00:14:22.274 Device Information : IOPS MiB/s Average min max 00:14:22.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.20 0.09 881414.01 467.11 1012652.74 00:14:22.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.75 0.08 896555.42 764.52 1012652.08 00:14:22.274 ======================================================== 00:14:22.274 Total : 344.95 0.17 888821.32 467.11 1012652.74 00:14:22.274 00:14:22.274 [2024-07-25 19:44:31.701293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ac620 (9): Bad file descriptor 00:14:22.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:22.532 19:44:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.532 19:44:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:22.532 19:44:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3929867 00:14:22.532 19:44:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3929867 00:14:22.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3929867) - No such process 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3929867 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3929867 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3929867 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.790 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.056 [2024-07-25 19:44:32.225833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3930274 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:23.056 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:23.056 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.056 [2024-07-25 19:44:32.289108] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
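The second pass re-creates cnode1, re-attaches Delay0 and starts a shorter, 3-second perf run, but this time leaves the subsystem in place and simply waits for perf to finish. The polling iterations traced below come from a loop of roughly the following shape, reconstructed from the script line numbers visible in the xtrace (57, 58, 60 and the final wait at 67); the timeout message and exit handling are assumptions:

  delay=0
  while kill -0 "$perf_pid"; do          # loop condition: is perf still alive?
      sleep 0.5
      if (( delay++ > 20 )); then        # bail out after ~10 s for a 3 s run
          echo "spdk_nvme_perf (pid $perf_pid) is still running after timeout" >&2
          exit 1
      fi
  done
  # Once kill -0 reports "No such process" the loop ends and the background job
  # is reaped with a plain wait, in contrast to the NOT wait used after the
  # deletion in the first pass.
  wait "$perf_pid"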
00:14:23.346 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:23.346 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:23.346 19:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:23.921 19:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:23.921 19:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:23.921 19:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:24.484 19:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:24.484 19:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:24.484 19:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.048 19:44:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.048 19:44:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:25.048 19:44:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.613 19:44:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.613 19:44:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:25.613 19:44:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.870 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.870 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:25.870 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:26.126 Initializing NVMe Controllers 00:14:26.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.126 Controller IO queue size 128, less than required. 00:14:26.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:26.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:26.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:26.126 Initialization complete. Launching workers. 
00:14:26.126 ======================================================== 00:14:26.127 Latency(us) 00:14:26.127 Device Information : IOPS MiB/s Average min max 00:14:26.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004024.53 1000231.76 1042446.68 00:14:26.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005038.83 1000243.19 1041389.74 00:14:26.127 ======================================================== 00:14:26.127 Total : 256.00 0.12 1004531.68 1000231.76 1042446.68 00:14:26.127 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3930274 00:14:26.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3930274) - No such process 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3930274 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.384 rmmod nvme_tcp 00:14:26.384 rmmod nvme_fabrics 00:14:26.384 rmmod nvme_keyring 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3929782 ']' 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3929782 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3929782 ']' 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3929782 00:14:26.384 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3929782 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3929782' 00:14:26.644 killing process with pid 3929782 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3929782 00:14:26.644 19:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
3929782 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.644 19:44:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.179 19:44:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.179 00:14:29.179 real 0m12.219s 00:14:29.179 user 0m27.732s 00:14:29.179 sys 0m3.026s 00:14:29.179 19:44:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.179 19:44:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.179 ************************************ 00:14:29.179 END TEST nvmf_delete_subsystem 00:14:29.179 ************************************ 00:14:29.179 19:44:38 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:29.179 19:44:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:29.179 19:44:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.179 19:44:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.179 ************************************ 00:14:29.179 START TEST nvmf_ns_masking 00:14:29.179 ************************************ 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:29.179 * Looking for test storage... 
00:14:29.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.179 19:44:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=4ec0a846-a172-4fa7-8453-18355bccbb76 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.180 19:44:38 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.180 19:44:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:31.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:31.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:31.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
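Editor's note: the device discovery traced above works purely from sysfs: each supported Intel/Mellanox PCI ID is matched on the bus and the bound netdev name is read back from the device's net/ subdirectory. A rough equivalent for one device, using the 0000:0a:00.0 address this log reports (lspci here is only an illustrative cross-check, not part of the script):

  pci=0000:0a:00.0
  # the kernel exposes the bound netdev name under the PCI device's sysfs node
  ls /sys/bus/pci/devices/$pci/net/        # -> cvl_0_0 in this run
  lspci -s $pci -nn                        # shows the 8086:159b (E810) ID the script matched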
00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:31.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:14:31.081 00:14:31.081 --- 10.0.0.2 ping statistics --- 00:14:31.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.081 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:14:31.081 00:14:31.081 --- 10.0.0.1 ping statistics --- 00:14:31.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.081 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.081 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3932612 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3932612 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3932612 ']' 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:31.082 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.082 [2024-07-25 19:44:40.358663] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
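Editor's note: the two ping checks above validate the topology nvmf_tcp_init just built: one NIC port is moved into a private network namespace and addressed as the 10.0.0.2 target, while the second port stays in the root namespace as the 10.0.0.1 initiator, with an iptables rule admitting NVMe/TCP traffic on 4420. Condensed from the trace (interface and namespace names are the ones this job generated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace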
00:14:31.082 [2024-07-25 19:44:40.358751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.082 [2024-07-25 19:44:40.428578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.339 [2024-07-25 19:44:40.524361] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.339 [2024-07-25 19:44:40.524423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.340 [2024-07-25 19:44:40.524439] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.340 [2024-07-25 19:44:40.524452] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.340 [2024-07-25 19:44:40.524463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.340 [2024-07-25 19:44:40.524844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.340 [2024-07-25 19:44:40.524877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.340 [2024-07-25 19:44:40.524946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.340 [2024-07-25 19:44:40.524949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.340 19:44:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:31.596 [2024-07-25 19:44:40.903540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.596 19:44:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:31.596 19:44:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:31.596 19:44:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.853 Malloc1 00:14:31.853 19:44:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:32.111 Malloc2 00:14:32.111 19:44:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.367 19:44:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:32.624 19:44:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.881 [2024-07-25 19:44:42.169774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ec0a846-a172-4fa7-8453-18355bccbb76 -a 10.0.0.2 -s 4420 -i 4 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:32.881 19:44:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:35.403 [ 0]:0x1 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=16c68cb7cec64856866628f123077448 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 16c68cb7cec64856866628f123077448 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
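Editor's note: the [ 0]:0x1 lines that follow come from the test's ns_is_visible helper: a namespace counts as visible only if it appears in nvme list-ns and reports a non-zero NGUID. A compressed sketch of the connect-and-check sequence, reusing the NQNs and the host UUID generated earlier in this log:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
       -n nqn.2016-06.io.spdk:cnode1 \
       -q nqn.2016-06.io.spdk:host1 -I 4ec0a846-a172-4fa7-8453-18355bccbb76
  ctrl=$(nvme list-subsys -o json | \
         jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')
  nvme list-ns /dev/$ctrl | grep 0x1                    # is NSID 1 listed at all?
  nvme id-ns /dev/$ctrl -n 0x1 -o json | jq -r .nguid   # an all-zero NGUID means the namespace is masked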
00:14:35.403 [ 0]:0x1 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:35.403 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=16c68cb7cec64856866628f123077448 00:14:35.404 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 16c68cb7cec64856866628f123077448 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.404 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:35.404 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:35.404 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:35.404 [ 1]:0x2 00:14:35.404 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.404 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:35.660 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:35.660 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.660 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:35.660 19:44:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.918 19:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.175 19:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:36.175 19:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:36.175 19:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ec0a846-a172-4fa7-8453-18355bccbb76 -a 10.0.0.2 -s 4420 -i 4 00:14:36.432 19:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:36.432 19:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:36.432 19:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.432 19:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:36.432 19:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:36.432 19:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:38.327 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.584 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:38.585 [ 0]:0x2 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.585 19:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:38.842 [ 0]:0x1 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=16c68cb7cec64856866628f123077448 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 16c68cb7cec64856866628f123077448 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:38.842 [ 1]:0x2 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.842 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:39.099 
19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:39.099 [ 0]:0x2 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:39.099 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.356 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ec0a846-a172-4fa7-8453-18355bccbb76 -a 10.0.0.2 -s 4420 -i 4 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:39.613 19:44:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:42.136 [ 0]:0x1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=16c68cb7cec64856866628f123077448 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 16c68cb7cec64856866628f123077448 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:42.136 [ 1]:0x2 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:42.136 [ 0]:0x2 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.136 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.137 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.137 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:42.137 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:42.422 [2024-07-25 19:44:51.676089] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:42.422 request: 00:14:42.422 { 00:14:42.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.422 "nsid": 2, 00:14:42.422 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.422 "method": 
"nvmf_ns_remove_host", 00:14:42.422 "req_id": 1 00:14:42.422 } 00:14:42.422 Got JSON-RPC error response 00:14:42.422 response: 00:14:42.422 { 00:14:42.422 "code": -32602, 00:14:42.422 "message": "Invalid parameters" 00:14:42.422 } 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:42.422 [ 0]:0x2 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.422 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.423 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7000c8ebdd3549019166937c4f73a104 00:14:42.423 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7000c8ebdd3549019166937c4f73a104 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.423 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:42.423 19:44:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.423 19:44:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.680 19:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.681 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.681 rmmod nvme_tcp 00:14:42.681 rmmod nvme_fabrics 00:14:42.938 rmmod nvme_keyring 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3932612 ']' 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3932612 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3932612 ']' 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3932612 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3932612 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3932612' 00:14:42.938 killing process with pid 3932612 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3932612 00:14:42.938 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3932612 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.197 19:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.112 
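Editor's note: for reference, the masking exercise that just finished boils down to a handful of JSON-RPC calls, restated below in the order the trace issued them (rpc_py is the path the script itself sets). The second-to-last call is the negative case from the log: it is rejected with -32602, apparently because namespace 2 was added without --no-auto-visible and therefore has no per-host visibility list to edit.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # NSID 1 hidden by default
  $rpc_py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # NSID 1 now visible to host1
  $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # hidden again
  $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1       # fails: NSID 2 is auto-visible
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1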
19:44:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.112 00:14:45.112 real 0m16.321s 00:14:45.112 user 0m50.789s 00:14:45.112 sys 0m3.736s 00:14:45.112 19:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.112 19:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.112 ************************************ 00:14:45.112 END TEST nvmf_ns_masking 00:14:45.112 ************************************ 00:14:45.112 19:44:54 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:45.112 19:44:54 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:45.112 19:44:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:45.112 19:44:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.112 19:44:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.112 ************************************ 00:14:45.112 START TEST nvmf_nvme_cli 00:14:45.112 ************************************ 00:14:45.112 19:44:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:45.371 * Looking for test storage... 00:14:45.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:45.371 19:44:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:47.275 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:47.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:47.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.276 19:44:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:47.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:47.276 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:14:47.276 00:14:47.276 --- 10.0.0.2 ping statistics --- 00:14:47.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.276 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:14:47.276 00:14:47.276 --- 10.0.0.1 ping statistics --- 00:14:47.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.276 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3936165 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3936165 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3936165 ']' 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
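[editor's note] For readability, the network plumbing the harness just performed (nvmf_tcp_init in nvmf/common.sh) condenses to the sketch below. It is assembled only from commands visible in the xtrace output above; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, the nvmf_tgt path, and the core mask are the values of this particular run, not requirements, and backgrounding the target with '&' merely stands in for what nvmfappstart/waitforlisten do.

# Sketch: move one port into a target namespace, address both ends, open TCP port 4420, start the target.
ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # first port goes into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # connectivity check, as above
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &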
00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:47.276 19:44:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.535 [2024-07-25 19:44:56.723019] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:47.535 [2024-07-25 19:44:56.723118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.535 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.535 [2024-07-25 19:44:56.800814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.535 [2024-07-25 19:44:56.899696] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.535 [2024-07-25 19:44:56.899748] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.535 [2024-07-25 19:44:56.899787] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.535 [2024-07-25 19:44:56.899809] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.535 [2024-07-25 19:44:56.899827] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.535 [2024-07-25 19:44:56.899926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.535 [2024-07-25 19:44:56.899956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.535 [2024-07-25 19:44:56.900011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.535 [2024-07-25 19:44:56.900019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.793 [2024-07-25 19:44:57.079066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.793 Malloc0 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.793 Malloc1 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.793 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.794 [2024-07-25 19:44:57.164751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.794 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:48.051 00:14:48.051 Discovery Log Number of Records 2, Generation counter 2 00:14:48.051 =====Discovery Log Entry 0====== 00:14:48.051 trtype: tcp 00:14:48.051 adrfam: ipv4 00:14:48.051 subtype: current discovery subsystem 00:14:48.051 treq: not required 00:14:48.051 portid: 0 00:14:48.051 trsvcid: 4420 00:14:48.051 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:48.051 traddr: 10.0.0.2 00:14:48.051 eflags: explicit discovery connections, duplicate discovery information 00:14:48.051 sectype: none 00:14:48.051 =====Discovery Log Entry 1====== 00:14:48.051 trtype: tcp 00:14:48.051 adrfam: ipv4 00:14:48.051 subtype: nvme subsystem 00:14:48.051 treq: not required 00:14:48.051 portid: 0 00:14:48.051 trsvcid: 
4420 00:14:48.051 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:48.051 traddr: 10.0.0.2 00:14:48.051 eflags: none 00:14:48.051 sectype: none 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:48.051 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.618 19:44:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:48.618 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:48.618 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.618 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:48.618 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:48.618 19:44:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:50.514 19:44:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:50.514 /dev/nvme0n1 ]] 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:50.514 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:50.515 19:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:50.772 19:45:00 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.772 rmmod nvme_tcp 00:14:50.772 rmmod nvme_fabrics 00:14:50.772 rmmod nvme_keyring 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3936165 ']' 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3936165 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3936165 ']' 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3936165 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3936165 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3936165' 00:14:50.772 killing process with pid 3936165 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3936165 00:14:50.772 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3936165 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.031 19:45:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.563 19:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.563 00:14:53.563 real 0m7.922s 00:14:53.563 user 0m14.436s 00:14:53.563 sys 0m2.170s 00:14:53.563 19:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.563 19:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 ************************************ 00:14:53.563 END TEST nvmf_nvme_cli 00:14:53.563 ************************************ 00:14:53.563 19:45:02 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:53.563 19:45:02 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:53.563 19:45:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:53.563 19:45:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:53.563 19:45:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 ************************************ 00:14:53.563 START TEST nvmf_vfio_user 00:14:53.563 ************************************ 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:53.563 * Looking for test storage... 00:14:53.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:53.563 
19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3937082 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3937082' 00:14:53.563 Process pid: 3937082 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3937082 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3937082 ']' 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.563 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.564 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:53.564 [2024-07-25 19:45:02.609676] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:53.564 [2024-07-25 19:45:02.609753] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.564 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.564 [2024-07-25 19:45:02.680693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.564 [2024-07-25 19:45:02.773066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.564 [2024-07-25 19:45:02.773142] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.564 [2024-07-25 19:45:02.773169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.564 [2024-07-25 19:45:02.773185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.564 [2024-07-25 19:45:02.773197] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
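[editor's note] Before the vfio-user leg continues, the initiator-side flow of the nvmf_nvme_cli test that passed above condenses to the short sequence below. It is taken directly from the xtrace output (host NQN/ID from nvme gen-hostnqn, subsystem NQN, and serial are the values of this run); the grep count of 2 corresponds to the two malloc namespaces added to cnode1.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420    # two discovery log entries expected
nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
sleep 2                                                                          # waitforserial polls until the namespaces show up
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME                           # expect 2 (Malloc0 + Malloc1)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1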
00:14:53.564 [2024-07-25 19:45:02.773263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.564 [2024-07-25 19:45:02.773294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.564 [2024-07-25 19:45:02.773360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.564 [2024-07-25 19:45:02.773363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.564 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.564 19:45:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:53.564 19:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:54.495 19:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:54.752 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:54.752 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:54.752 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.752 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:54.752 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:55.009 Malloc1 00:14:55.009 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:55.266 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:55.523 19:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:55.780 19:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.780 19:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:55.780 19:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:56.038 Malloc2 00:14:56.038 19:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:56.295 19:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:56.559 19:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:56.817 19:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:56.817 19:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:56.817 19:45:06 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.817 19:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.817 19:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.817 19:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:56.817 [2024-07-25 19:45:06.222996] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:56.817 [2024-07-25 19:45:06.223049] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937505 ] 00:14:56.817 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.076 [2024-07-25 19:45:06.257584] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:57.076 [2024-07-25 19:45:06.266175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.076 [2024-07-25 19:45:06.266203] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2453cd3000 00:14:57.076 [2024-07-25 19:45:06.267169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.268161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.269171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.270177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.271184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.272186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.273193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.274198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.076 [2024-07-25 19:45:06.275205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.076 [2024-07-25 19:45:06.275225] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2452a85000 00:14:57.076 [2024-07-25 19:45:06.276344] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.077 [2024-07-25 19:45:06.289708] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:57.077 [2024-07-25 19:45:06.289742] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:57.077 [2024-07-25 19:45:06.298326] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:57.077 [2024-07-25 19:45:06.298397] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:57.077 [2024-07-25 19:45:06.298486] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:57.077 [2024-07-25 19:45:06.298514] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:57.077 [2024-07-25 19:45:06.298524] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:57.077 [2024-07-25 19:45:06.299317] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:57.077 [2024-07-25 19:45:06.299356] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:57.077 [2024-07-25 19:45:06.299371] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:57.077 [2024-07-25 19:45:06.300321] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:57.077 [2024-07-25 19:45:06.300354] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:57.077 [2024-07-25 19:45:06.300369] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.077 [2024-07-25 19:45:06.301324] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:57.077 [2024-07-25 19:45:06.301342] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.077 [2024-07-25 19:45:06.302331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:57.077 [2024-07-25 19:45:06.302364] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:57.077 [2024-07-25 19:45:06.302373] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:57.077 [2024-07-25 19:45:06.302385] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.077 [2024-07-25 19:45:06.302494] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:57.077 [2024-07-25 19:45:06.302502] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.077 [2024-07-25 19:45:06.302510] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:57.077 [2024-07-25 19:45:06.303345] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:57.077 [2024-07-25 19:45:06.304341] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:57.077 [2024-07-25 19:45:06.305347] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:57.077 [2024-07-25 19:45:06.306360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.077 [2024-07-25 19:45:06.306497] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.077 [2024-07-25 19:45:06.307359] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:57.077 [2024-07-25 19:45:06.307392] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.077 [2024-07-25 19:45:06.307401] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307426] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:57.077 [2024-07-25 19:45:06.307440] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307468] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.077 [2024-07-25 19:45:06.307478] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.077 [2024-07-25 19:45:06.307497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.077 [2024-07-25 19:45:06.307576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:57.077 [2024-07-25 19:45:06.307595] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:57.077 [2024-07-25 19:45:06.307604] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:57.077 [2024-07-25 19:45:06.307611] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:57.077 [2024-07-25 19:45:06.307619] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:57.077 [2024-07-25 19:45:06.307626] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:57.077 [2024-07-25 19:45:06.307634] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:57.077 [2024-07-25 19:45:06.307641] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307653] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:57.077 [2024-07-25 19:45:06.307686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:57.077 [2024-07-25 19:45:06.307703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.077 [2024-07-25 19:45:06.307720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.077 [2024-07-25 19:45:06.307732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.077 [2024-07-25 19:45:06.307744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.077 [2024-07-25 19:45:06.307752] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307769] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:57.077 [2024-07-25 19:45:06.307799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:57.077 [2024-07-25 19:45:06.307809] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:57.077 [2024-07-25 19:45:06.307817] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307828] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307842] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.077 [2024-07-25 19:45:06.307867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:57.077 [2024-07-25 19:45:06.307932] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.307960] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:57.077 [2024-07-25 19:45:06.307969] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:57.077 [2024-07-25 19:45:06.307978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:57.077 [2024-07-25 19:45:06.307990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:57.077 [2024-07-25 19:45:06.308004] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:57.077 [2024-07-25 19:45:06.308023] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.308051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.308071] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.077 [2024-07-25 19:45:06.308080] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.077 [2024-07-25 19:45:06.308090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.077 [2024-07-25 19:45:06.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:57.077 [2024-07-25 19:45:06.308146] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.308161] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:57.077 [2024-07-25 19:45:06.308173] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.077 [2024-07-25 19:45:06.308181] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.077 [2024-07-25 19:45:06.308191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308220] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:57.078 [2024-07-25 19:45:06.308232] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
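The register traffic above is SPDK's standard NVMe controller bring-up replayed over the vfio-user transport: the host reads VS (offset 0x8), CAP (0x0) and CC/CSTS (0x14/0x1c), confirms CC.EN = 0 and CSTS.RDY = 0, programs the admin queue through ASQ/ACQ/AQA (offsets 0x28/0x30/0x24), writes CC.EN = 1 (the 0x460001 write to offset 0x14), then polls CSTS until RDY = 1 before walking the admin-queue sequence shown here (identify controller, AER configuration, keep-alive and queue-count feature commands, namespace identification). A minimal way to reproduce this trace by hand, assuming the vfio-user target from this run is still listening; the binary path, socket path and NQN below are specific to this CI workspace:

IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
$IDENTIFY \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -g -L nvme -L nvme_vfio -L vfio_pci

The -L options enable the nvme, nvme_vfio and vfio_pci debug log components, which is what produces the register-level *DEBUG* lines; -g corresponds to DPDK's --single-file-segments, visible in the EAL parameter line further down in this log.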
00:14:57.078 [2024-07-25 19:45:06.308245] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:57.078 [2024-07-25 19:45:06.308256] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:57.078 [2024-07-25 19:45:06.308264] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:57.078 [2024-07-25 19:45:06.308273] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:57.078 [2024-07-25 19:45:06.308281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:57.078 [2024-07-25 19:45:06.308290] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:57.078 [2024-07-25 19:45:06.308320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308465] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:57.078 [2024-07-25 19:45:06.308474] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:57.078 [2024-07-25 19:45:06.308480] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:57.078 [2024-07-25 19:45:06.308486] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:57.078 [2024-07-25 19:45:06.308500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:57.078 [2024-07-25 19:45:06.308512] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:57.078 [2024-07-25 19:45:06.308520] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:57.078 [2024-07-25 19:45:06.308529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308540] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:57.078 [2024-07-25 19:45:06.308547] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.078 [2024-07-25 19:45:06.308556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308568] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:57.078 [2024-07-25 19:45:06.308576] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:57.078 [2024-07-25 19:45:06.308585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:57.078 [2024-07-25 19:45:06.308596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:57.078 [2024-07-25 19:45:06.308645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:57.078 ===================================================== 00:14:57.078 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.078 ===================================================== 00:14:57.078 Controller Capabilities/Features 00:14:57.078 ================================ 00:14:57.078 Vendor ID: 4e58 00:14:57.078 Subsystem Vendor ID: 4e58 00:14:57.078 Serial Number: SPDK1 00:14:57.078 Model Number: SPDK bdev Controller 00:14:57.078 Firmware Version: 24.05.1 00:14:57.078 Recommended Arb Burst: 6 00:14:57.078 IEEE OUI Identifier: 8d 6b 50 00:14:57.078 Multi-path I/O 00:14:57.078 May have multiple subsystem ports: Yes 00:14:57.078 May have multiple controllers: Yes 00:14:57.078 Associated with SR-IOV VF: No 00:14:57.078 Max Data Transfer Size: 131072 00:14:57.078 Max Number of Namespaces: 32 00:14:57.078 Max Number of I/O Queues: 127 00:14:57.078 NVMe Specification Version (VS): 1.3 00:14:57.078 NVMe Specification Version (Identify): 1.3 00:14:57.078 Maximum Queue Entries: 256 00:14:57.078 Contiguous Queues Required: Yes 00:14:57.078 Arbitration Mechanisms Supported 00:14:57.078 Weighted Round Robin: Not Supported 00:14:57.078 Vendor Specific: Not Supported 00:14:57.078 Reset Timeout: 15000 ms 00:14:57.078 Doorbell Stride: 4 bytes 00:14:57.078 NVM Subsystem Reset: Not Supported 00:14:57.078 Command Sets Supported 00:14:57.078 NVM Command Set: Supported 00:14:57.078 Boot Partition: Not Supported 00:14:57.078 Memory Page Size Minimum: 4096 bytes 00:14:57.078 Memory Page Size Maximum: 4096 bytes 00:14:57.078 Persistent Memory Region: Not Supported 00:14:57.078 Optional Asynchronous Events Supported 00:14:57.078 Namespace Attribute Notices: Supported 00:14:57.078 Firmware Activation Notices: Not Supported 00:14:57.078 ANA Change Notices: Not Supported 
00:14:57.078 PLE Aggregate Log Change Notices: Not Supported 00:14:57.078 LBA Status Info Alert Notices: Not Supported 00:14:57.078 EGE Aggregate Log Change Notices: Not Supported 00:14:57.078 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.078 Zone Descriptor Change Notices: Not Supported 00:14:57.078 Discovery Log Change Notices: Not Supported 00:14:57.078 Controller Attributes 00:14:57.078 128-bit Host Identifier: Supported 00:14:57.078 Non-Operational Permissive Mode: Not Supported 00:14:57.078 NVM Sets: Not Supported 00:14:57.078 Read Recovery Levels: Not Supported 00:14:57.078 Endurance Groups: Not Supported 00:14:57.078 Predictable Latency Mode: Not Supported 00:14:57.078 Traffic Based Keep ALive: Not Supported 00:14:57.078 Namespace Granularity: Not Supported 00:14:57.078 SQ Associations: Not Supported 00:14:57.078 UUID List: Not Supported 00:14:57.078 Multi-Domain Subsystem: Not Supported 00:14:57.078 Fixed Capacity Management: Not Supported 00:14:57.078 Variable Capacity Management: Not Supported 00:14:57.078 Delete Endurance Group: Not Supported 00:14:57.078 Delete NVM Set: Not Supported 00:14:57.078 Extended LBA Formats Supported: Not Supported 00:14:57.078 Flexible Data Placement Supported: Not Supported 00:14:57.078 00:14:57.078 Controller Memory Buffer Support 00:14:57.078 ================================ 00:14:57.078 Supported: No 00:14:57.078 00:14:57.078 Persistent Memory Region Support 00:14:57.078 ================================ 00:14:57.078 Supported: No 00:14:57.078 00:14:57.078 Admin Command Set Attributes 00:14:57.078 ============================ 00:14:57.078 Security Send/Receive: Not Supported 00:14:57.078 Format NVM: Not Supported 00:14:57.078 Firmware Activate/Download: Not Supported 00:14:57.078 Namespace Management: Not Supported 00:14:57.078 Device Self-Test: Not Supported 00:14:57.078 Directives: Not Supported 00:14:57.078 NVMe-MI: Not Supported 00:14:57.078 Virtualization Management: Not Supported 00:14:57.078 Doorbell Buffer Config: Not Supported 00:14:57.078 Get LBA Status Capability: Not Supported 00:14:57.078 Command & Feature Lockdown Capability: Not Supported 00:14:57.078 Abort Command Limit: 4 00:14:57.078 Async Event Request Limit: 4 00:14:57.078 Number of Firmware Slots: N/A 00:14:57.078 Firmware Slot 1 Read-Only: N/A 00:14:57.078 Firmware Activation Without Reset: N/A 00:14:57.078 Multiple Update Detection Support: N/A 00:14:57.078 Firmware Update Granularity: No Information Provided 00:14:57.078 Per-Namespace SMART Log: No 00:14:57.078 Asymmetric Namespace Access Log Page: Not Supported 00:14:57.078 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:57.078 Command Effects Log Page: Supported 00:14:57.078 Get Log Page Extended Data: Supported 00:14:57.078 Telemetry Log Pages: Not Supported 00:14:57.078 Persistent Event Log Pages: Not Supported 00:14:57.078 Supported Log Pages Log Page: May Support 00:14:57.078 Commands Supported & Effects Log Page: Not Supported 00:14:57.078 Feature Identifiers & Effects Log Page:May Support 00:14:57.078 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.078 Data Area 4 for Telemetry Log: Not Supported 00:14:57.078 Error Log Page Entries Supported: 128 00:14:57.078 Keep Alive: Supported 00:14:57.078 Keep Alive Granularity: 10000 ms 00:14:57.078 00:14:57.078 NVM Command Set Attributes 00:14:57.079 ========================== 00:14:57.079 Submission Queue Entry Size 00:14:57.079 Max: 64 00:14:57.079 Min: 64 00:14:57.079 Completion Queue Entry Size 00:14:57.079 Max: 16 00:14:57.079 Min: 16 
00:14:57.079 Number of Namespaces: 32 00:14:57.079 Compare Command: Supported 00:14:57.079 Write Uncorrectable Command: Not Supported 00:14:57.079 Dataset Management Command: Supported 00:14:57.079 Write Zeroes Command: Supported 00:14:57.079 Set Features Save Field: Not Supported 00:14:57.079 Reservations: Not Supported 00:14:57.079 Timestamp: Not Supported 00:14:57.079 Copy: Supported 00:14:57.079 Volatile Write Cache: Present 00:14:57.079 Atomic Write Unit (Normal): 1 00:14:57.079 Atomic Write Unit (PFail): 1 00:14:57.079 Atomic Compare & Write Unit: 1 00:14:57.079 Fused Compare & Write: Supported 00:14:57.079 Scatter-Gather List 00:14:57.079 SGL Command Set: Supported (Dword aligned) 00:14:57.079 SGL Keyed: Not Supported 00:14:57.079 SGL Bit Bucket Descriptor: Not Supported 00:14:57.079 SGL Metadata Pointer: Not Supported 00:14:57.079 Oversized SGL: Not Supported 00:14:57.079 SGL Metadata Address: Not Supported 00:14:57.079 SGL Offset: Not Supported 00:14:57.079 Transport SGL Data Block: Not Supported 00:14:57.079 Replay Protected Memory Block: Not Supported 00:14:57.079 00:14:57.079 Firmware Slot Information 00:14:57.079 ========================= 00:14:57.079 Active slot: 1 00:14:57.079 Slot 1 Firmware Revision: 24.05.1 00:14:57.079 00:14:57.079 00:14:57.079 Commands Supported and Effects 00:14:57.079 ============================== 00:14:57.079 Admin Commands 00:14:57.079 -------------- 00:14:57.079 Get Log Page (02h): Supported 00:14:57.079 Identify (06h): Supported 00:14:57.079 Abort (08h): Supported 00:14:57.079 Set Features (09h): Supported 00:14:57.079 Get Features (0Ah): Supported 00:14:57.079 Asynchronous Event Request (0Ch): Supported 00:14:57.079 Keep Alive (18h): Supported 00:14:57.079 I/O Commands 00:14:57.079 ------------ 00:14:57.079 Flush (00h): Supported LBA-Change 00:14:57.079 Write (01h): Supported LBA-Change 00:14:57.079 Read (02h): Supported 00:14:57.079 Compare (05h): Supported 00:14:57.079 Write Zeroes (08h): Supported LBA-Change 00:14:57.079 Dataset Management (09h): Supported LBA-Change 00:14:57.079 Copy (19h): Supported LBA-Change 00:14:57.079 Unknown (79h): Supported LBA-Change 00:14:57.079 Unknown (7Ah): Supported 00:14:57.079 00:14:57.079 Error Log 00:14:57.079 ========= 00:14:57.079 00:14:57.079 Arbitration 00:14:57.079 =========== 00:14:57.079 Arbitration Burst: 1 00:14:57.079 00:14:57.079 Power Management 00:14:57.079 ================ 00:14:57.079 Number of Power States: 1 00:14:57.079 Current Power State: Power State #0 00:14:57.079 Power State #0: 00:14:57.079 Max Power: 0.00 W 00:14:57.079 Non-Operational State: Operational 00:14:57.079 Entry Latency: Not Reported 00:14:57.079 Exit Latency: Not Reported 00:14:57.079 Relative Read Throughput: 0 00:14:57.079 Relative Read Latency: 0 00:14:57.079 Relative Write Throughput: 0 00:14:57.079 Relative Write Latency: 0 00:14:57.079 Idle Power: Not Reported 00:14:57.079 Active Power: Not Reported 00:14:57.079 Non-Operational Permissive Mode: Not Supported 00:14:57.079 00:14:57.079 Health Information 00:14:57.079 ================== 00:14:57.079 Critical Warnings: 00:14:57.079 Available Spare Space: OK 00:14:57.079 Temperature: OK 00:14:57.079 Device Reliability: OK 00:14:57.079 Read Only: No 00:14:57.079 Volatile Memory Backup: OK 00:14:57.079 Current Temperature: 0 Kelvin[2024-07-25 19:45:06.308766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:57.079 [2024-07-25 19:45:06.308783] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:57.079 [2024-07-25 19:45:06.308819] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:57.079 [2024-07-25 19:45:06.308835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.079 [2024-07-25 19:45:06.308846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.079 [2024-07-25 19:45:06.308856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.079 [2024-07-25 19:45:06.308866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.079 [2024-07-25 19:45:06.309381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:57.079 [2024-07-25 19:45:06.309402] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:57.079 [2024-07-25 19:45:06.310373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.079 [2024-07-25 19:45:06.310476] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:57.079 [2024-07-25 19:45:06.310490] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:57.079 [2024-07-25 19:45:06.311391] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:57.079 [2024-07-25 19:45:06.311417] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:57.079 [2024-07-25 19:45:06.311471] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:57.079 [2024-07-25 19:45:06.315098] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.079 (-273 Celsius) 00:14:57.079 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:57.079 Available Spare: 0% 00:14:57.079 Available Spare Threshold: 0% 00:14:57.079 Life Percentage Used: 0% 00:14:57.079 Data Units Read: 0 00:14:57.079 Data Units Written: 0 00:14:57.079 Host Read Commands: 0 00:14:57.079 Host Write Commands: 0 00:14:57.079 Controller Busy Time: 0 minutes 00:14:57.079 Power Cycles: 0 00:14:57.079 Power On Hours: 0 hours 00:14:57.079 Unsafe Shutdowns: 0 00:14:57.079 Unrecoverable Media Errors: 0 00:14:57.079 Lifetime Error Log Entries: 0 00:14:57.079 Warning Temperature Time: 0 minutes 00:14:57.079 Critical Temperature Time: 0 minutes 00:14:57.079 00:14:57.079 Number of Queues 00:14:57.079 ================ 00:14:57.079 Number of I/O Submission Queues: 127 00:14:57.079 Number of I/O Completion Queues: 127 00:14:57.079 00:14:57.079 Active Namespaces 00:14:57.079 ================= 00:14:57.079 Namespace ID:1 00:14:57.079 Error Recovery Timeout: Unlimited 00:14:57.079 Command Set Identifier: NVM (00h) 00:14:57.079 Deallocate: Supported 00:14:57.079 Deallocated/Unwritten Error: Not Supported 
00:14:57.079 Deallocated Read Value: Unknown 00:14:57.079 Deallocate in Write Zeroes: Not Supported 00:14:57.079 Deallocated Guard Field: 0xFFFF 00:14:57.079 Flush: Supported 00:14:57.079 Reservation: Supported 00:14:57.079 Namespace Sharing Capabilities: Multiple Controllers 00:14:57.079 Size (in LBAs): 131072 (0GiB) 00:14:57.079 Capacity (in LBAs): 131072 (0GiB) 00:14:57.079 Utilization (in LBAs): 131072 (0GiB) 00:14:57.079 NGUID: 0A64A9325588468B890EC0FC0CB439AF 00:14:57.079 UUID: 0a64a932-5588-468b-890e-c0fc0cb439af 00:14:57.079 Thin Provisioning: Not Supported 00:14:57.079 Per-NS Atomic Units: Yes 00:14:57.079 Atomic Boundary Size (Normal): 0 00:14:57.079 Atomic Boundary Size (PFail): 0 00:14:57.079 Atomic Boundary Offset: 0 00:14:57.079 Maximum Single Source Range Length: 65535 00:14:57.079 Maximum Copy Length: 65535 00:14:57.079 Maximum Source Range Count: 1 00:14:57.079 NGUID/EUI64 Never Reused: No 00:14:57.079 Namespace Write Protected: No 00:14:57.079 Number of LBA Formats: 1 00:14:57.079 Current LBA Format: LBA Format #00 00:14:57.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:57.079 00:14:57.079 19:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:57.079 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.337 [2024-07-25 19:45:06.544903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.597 Initializing NVMe Controllers 00:15:02.597 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.597 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.597 Initialization complete. Launching workers. 00:15:02.597 ======================================================== 00:15:02.597 Latency(us) 00:15:02.597 Device Information : IOPS MiB/s Average min max 00:15:02.597 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35348.00 138.08 3621.82 1168.17 7503.68 00:15:02.597 ======================================================== 00:15:02.597 Total : 35348.00 138.08 3621.82 1168.17 7503.68 00:15:02.597 00:15:02.597 [2024-07-25 19:45:11.571306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.597 19:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:02.597 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.597 [2024-07-25 19:45:11.803462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.884 Initializing NVMe Controllers 00:15:07.884 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.884 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:07.884 Initialization complete. Launching workers. 
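The IOPS/latency table above and the one that follows come from spdk_nvme_perf driven over the same vfio-user controller: a 4096-byte read workload and then a write workload, each at queue depth 128 for 5 seconds on core 1 (core mask 0x2). The table columns are IOPS, throughput in MiB/s, and average/min/max latency in microseconds. For reference, the two invocations recorded in this log, restated for readability (the binary path and vfio-user socket are specific to this CI workspace):

PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# 4 KiB reads, queue depth 128, 5 s, core mask 0x2
$PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# same shape, write workload
$PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2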
00:15:07.884 ======================================================== 00:15:07.884 Latency(us) 00:15:07.884 Device Information : IOPS MiB/s Average min max 00:15:07.884 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15961.30 62.35 8024.65 4960.84 15964.35 00:15:07.884 ======================================================== 00:15:07.884 Total : 15961.30 62.35 8024.65 4960.84 15964.35 00:15:07.884 00:15:07.884 [2024-07-25 19:45:16.839642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.884 19:45:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:07.884 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.884 [2024-07-25 19:45:17.050701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.143 [2024-07-25 19:45:22.118414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.143 Initializing NVMe Controllers 00:15:13.143 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.143 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:13.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:13.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:13.143 Initialization complete. Launching workers. 00:15:13.143 Starting thread on core 2 00:15:13.143 Starting thread on core 3 00:15:13.143 Starting thread on core 1 00:15:13.143 19:45:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:13.143 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.143 [2024-07-25 19:45:22.422508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.422 [2024-07-25 19:45:25.488572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.422 Initializing NVMe Controllers 00:15:16.422 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.422 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:16.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:16.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:16.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:16.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:16.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:16.422 Initialization complete. Launching workers. 
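The reconnect and arbitration runs in this stretch exercise SPDK's example applications against the same controller: reconnect drives a mixed random read/write workload (-w randrw -M 50) at queue depth 32 on cores 1-3 (mask 0xE), while arbitration runs a short multi-core test, prints its own expanded configuration line, and reports per-core IO/s in the summary that follows. Restating the two invocations from this log for readability (paths are specific to this CI workspace):

EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# reconnect: mixed random 4 KiB I/O, queue depth 32, 5 s, cores 1-3
$EXAMPLES/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
# arbitration: 3 s run; per-core results appear in the summary below
$EXAMPLES/arbitration -t 3 -r "$TRID" -d 256 -g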
00:15:16.422 Starting thread on core 1 with urgent priority queue 00:15:16.422 Starting thread on core 2 with urgent priority queue 00:15:16.422 Starting thread on core 3 with urgent priority queue 00:15:16.422 Starting thread on core 0 with urgent priority queue 00:15:16.422 SPDK bdev Controller (SPDK1 ) core 0: 7642.33 IO/s 13.09 secs/100000 ios 00:15:16.422 SPDK bdev Controller (SPDK1 ) core 1: 7340.33 IO/s 13.62 secs/100000 ios 00:15:16.422 SPDK bdev Controller (SPDK1 ) core 2: 7339.67 IO/s 13.62 secs/100000 ios 00:15:16.423 SPDK bdev Controller (SPDK1 ) core 3: 7702.00 IO/s 12.98 secs/100000 ios 00:15:16.423 ======================================================== 00:15:16.423 00:15:16.423 19:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:16.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.423 [2024-07-25 19:45:25.787553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.423 Initializing NVMe Controllers 00:15:16.423 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.423 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.423 Namespace ID: 1 size: 0GB 00:15:16.423 Initialization complete. 00:15:16.423 INFO: using host memory buffer for IO 00:15:16.423 Hello world! 00:15:16.423 [2024-07-25 19:45:25.821038] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.678 19:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:16.678 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.934 [2024-07-25 19:45:26.120534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.866 Initializing NVMe Controllers 00:15:17.866 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.866 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.866 Initialization complete. Launching workers. 
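The hello_world example above attaches to the controller, writes a small buffer to namespace 1 using a host memory buffer for I/O and reads it back (the "Hello world!" line is the read-back data), while the overhead tool measures per-I/O software overhead on the submission and completion paths; the "submit (in ns)" / "complete (in ns)" summary lines and the two histograms that follow are its output, presumably enabled by the -H flag, with bucket ranges in microseconds and cumulative counts per bucket. Restated invocations from this log (paths specific to this CI workspace):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# hello_world: single write/read round trip against namespace 1
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"
# overhead: 1 s of 4 KiB I/O with submit/complete histograms (-H)
$SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"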
00:15:17.866 submit (in ns) avg, min, max = 9293.1, 3501.1, 6993496.7 00:15:17.866 complete (in ns) avg, min, max = 23420.7, 2058.9, 6993587.8 00:15:17.866 00:15:17.866 Submit histogram 00:15:17.866 ================ 00:15:17.866 Range in us Cumulative Count 00:15:17.866 3.484 - 3.508: 0.0221% ( 3) 00:15:17.866 3.508 - 3.532: 0.5389% ( 70) 00:15:17.866 3.532 - 3.556: 1.8826% ( 182) 00:15:17.866 3.556 - 3.579: 5.5519% ( 497) 00:15:17.866 3.579 - 3.603: 11.7460% ( 839) 00:15:17.866 3.603 - 3.627: 21.4987% ( 1321) 00:15:17.866 3.627 - 3.650: 32.9642% ( 1553) 00:15:17.866 3.650 - 3.674: 42.8276% ( 1336) 00:15:17.866 3.674 - 3.698: 50.0775% ( 982) 00:15:17.866 3.698 - 3.721: 56.3381% ( 848) 00:15:17.867 3.721 - 3.745: 61.4249% ( 689) 00:15:17.867 3.745 - 3.769: 66.0687% ( 629) 00:15:17.867 3.769 - 3.793: 69.6050% ( 479) 00:15:17.867 3.793 - 3.816: 72.3736% ( 375) 00:15:17.867 3.816 - 3.840: 75.2381% ( 388) 00:15:17.867 3.840 - 3.864: 78.7523% ( 476) 00:15:17.867 3.864 - 3.887: 82.5102% ( 509) 00:15:17.867 3.887 - 3.911: 85.4633% ( 400) 00:15:17.867 3.911 - 3.935: 87.5526% ( 283) 00:15:17.867 3.935 - 3.959: 89.2285% ( 227) 00:15:17.867 3.959 - 3.982: 91.0299% ( 244) 00:15:17.867 3.982 - 4.006: 92.7796% ( 237) 00:15:17.867 4.006 - 4.030: 93.8797% ( 149) 00:15:17.867 4.030 - 4.053: 94.7656% ( 120) 00:15:17.867 4.053 - 4.077: 95.5039% ( 100) 00:15:17.867 4.077 - 4.101: 96.0871% ( 79) 00:15:17.867 4.101 - 4.124: 96.4710% ( 52) 00:15:17.867 4.124 - 4.148: 96.6999% ( 31) 00:15:17.867 4.148 - 4.172: 96.8254% ( 17) 00:15:17.867 4.172 - 4.196: 96.9140% ( 12) 00:15:17.867 4.196 - 4.219: 97.0026% ( 12) 00:15:17.867 4.219 - 4.243: 97.0912% ( 12) 00:15:17.867 4.243 - 4.267: 97.2019% ( 15) 00:15:17.867 4.267 - 4.290: 97.3053% ( 14) 00:15:17.867 4.290 - 4.314: 97.3791% ( 10) 00:15:17.867 4.314 - 4.338: 97.4382% ( 8) 00:15:17.867 4.338 - 4.361: 97.4751% ( 5) 00:15:17.867 4.361 - 4.385: 97.5046% ( 4) 00:15:17.867 4.385 - 4.409: 97.5194% ( 2) 00:15:17.867 4.409 - 4.433: 97.5415% ( 3) 00:15:17.867 4.456 - 4.480: 97.5563% ( 2) 00:15:17.867 4.504 - 4.527: 97.5637% ( 1) 00:15:17.867 4.527 - 4.551: 97.5858% ( 3) 00:15:17.867 4.575 - 4.599: 97.6080% ( 3) 00:15:17.867 4.599 - 4.622: 97.6227% ( 2) 00:15:17.867 4.622 - 4.646: 97.6301% ( 1) 00:15:17.867 4.646 - 4.670: 97.6597% ( 4) 00:15:17.867 4.670 - 4.693: 97.6744% ( 2) 00:15:17.867 4.693 - 4.717: 97.7187% ( 6) 00:15:17.867 4.717 - 4.741: 97.7778% ( 8) 00:15:17.867 4.741 - 4.764: 97.8885% ( 15) 00:15:17.867 4.764 - 4.788: 97.9328% ( 6) 00:15:17.867 4.788 - 4.812: 97.9771% ( 6) 00:15:17.867 4.812 - 4.836: 98.0436% ( 9) 00:15:17.867 4.836 - 4.859: 98.0509% ( 1) 00:15:17.867 4.859 - 4.883: 98.0657% ( 2) 00:15:17.867 4.883 - 4.907: 98.1026% ( 5) 00:15:17.867 4.907 - 4.930: 98.1469% ( 6) 00:15:17.867 4.930 - 4.954: 98.1986% ( 7) 00:15:17.867 4.954 - 4.978: 98.2281% ( 4) 00:15:17.867 4.978 - 5.001: 98.2650% ( 5) 00:15:17.867 5.001 - 5.025: 98.2798% ( 2) 00:15:17.867 5.025 - 5.049: 98.2872% ( 1) 00:15:17.867 5.049 - 5.073: 98.3020% ( 2) 00:15:17.867 5.073 - 5.096: 98.3093% ( 1) 00:15:17.867 5.096 - 5.120: 98.3167% ( 1) 00:15:17.867 5.120 - 5.144: 98.3315% ( 2) 00:15:17.867 5.167 - 5.191: 98.3536% ( 3) 00:15:17.867 5.191 - 5.215: 98.3610% ( 1) 00:15:17.867 5.215 - 5.239: 98.3684% ( 1) 00:15:17.867 5.262 - 5.286: 98.3758% ( 1) 00:15:17.867 5.357 - 5.381: 98.3832% ( 1) 00:15:17.867 6.163 - 6.210: 98.3906% ( 1) 00:15:17.867 6.210 - 6.258: 98.3979% ( 1) 00:15:17.867 6.684 - 6.732: 98.4053% ( 1) 00:15:17.867 6.732 - 6.779: 98.4127% ( 1) 00:15:17.867 6.779 - 6.827: 98.4201% ( 1) 
00:15:17.867 6.827 - 6.874: 98.4275% ( 1) 00:15:17.867 6.874 - 6.921: 98.4422% ( 2) 00:15:17.867 7.064 - 7.111: 98.4570% ( 2) 00:15:17.867 7.111 - 7.159: 98.4718% ( 2) 00:15:17.867 7.159 - 7.206: 98.4865% ( 2) 00:15:17.867 7.253 - 7.301: 98.4939% ( 1) 00:15:17.867 7.348 - 7.396: 98.5013% ( 1) 00:15:17.867 7.396 - 7.443: 98.5161% ( 2) 00:15:17.867 7.490 - 7.538: 98.5382% ( 3) 00:15:17.867 7.633 - 7.680: 98.5530% ( 2) 00:15:17.867 7.680 - 7.727: 98.5604% ( 1) 00:15:17.867 7.727 - 7.775: 98.5751% ( 2) 00:15:17.867 7.775 - 7.822: 98.5825% ( 1) 00:15:17.867 7.822 - 7.870: 98.5899% ( 1) 00:15:17.867 7.917 - 7.964: 98.5973% ( 1) 00:15:17.867 7.964 - 8.012: 98.6047% ( 1) 00:15:17.867 8.012 - 8.059: 98.6120% ( 1) 00:15:17.867 8.107 - 8.154: 98.6342% ( 3) 00:15:17.867 8.201 - 8.249: 98.6416% ( 1) 00:15:17.867 8.249 - 8.296: 98.6637% ( 3) 00:15:17.867 8.296 - 8.344: 98.6711% ( 1) 00:15:17.867 8.344 - 8.391: 98.6785% ( 1) 00:15:17.867 8.439 - 8.486: 98.6859% ( 1) 00:15:17.867 8.486 - 8.533: 98.6932% ( 1) 00:15:17.867 8.676 - 8.723: 98.7006% ( 1) 00:15:17.867 8.865 - 8.913: 98.7154% ( 2) 00:15:17.867 8.913 - 8.960: 98.7302% ( 2) 00:15:17.867 9.150 - 9.197: 98.7449% ( 2) 00:15:17.867 9.244 - 9.292: 98.7523% ( 1) 00:15:17.867 9.434 - 9.481: 98.7671% ( 2) 00:15:17.867 10.003 - 10.050: 98.7818% ( 2) 00:15:17.867 10.430 - 10.477: 98.7892% ( 1) 00:15:17.867 10.477 - 10.524: 98.8040% ( 2) 00:15:17.867 10.667 - 10.714: 98.8114% ( 1) 00:15:17.867 10.809 - 10.856: 98.8188% ( 1) 00:15:17.867 11.046 - 11.093: 98.8261% ( 1) 00:15:17.867 11.236 - 11.283: 98.8335% ( 1) 00:15:17.867 11.567 - 11.615: 98.8409% ( 1) 00:15:17.867 11.804 - 11.852: 98.8483% ( 1) 00:15:17.867 11.947 - 11.994: 98.8557% ( 1) 00:15:17.867 12.089 - 12.136: 98.8704% ( 2) 00:15:17.867 12.326 - 12.421: 98.8852% ( 2) 00:15:17.867 12.421 - 12.516: 98.8926% ( 1) 00:15:17.867 12.516 - 12.610: 98.9000% ( 1) 00:15:17.867 12.705 - 12.800: 98.9073% ( 1) 00:15:17.867 13.084 - 13.179: 98.9147% ( 1) 00:15:17.867 13.179 - 13.274: 98.9295% ( 2) 00:15:17.867 13.369 - 13.464: 98.9369% ( 1) 00:15:17.867 13.748 - 13.843: 98.9443% ( 1) 00:15:17.867 14.033 - 14.127: 98.9516% ( 1) 00:15:17.867 14.317 - 14.412: 98.9590% ( 1) 00:15:17.867 14.412 - 14.507: 98.9738% ( 2) 00:15:17.867 14.507 - 14.601: 98.9812% ( 1) 00:15:17.867 14.696 - 14.791: 98.9886% ( 1) 00:15:17.867 14.791 - 14.886: 98.9959% ( 1) 00:15:17.867 14.886 - 14.981: 99.0033% ( 1) 00:15:17.867 17.256 - 17.351: 99.0329% ( 4) 00:15:17.867 17.351 - 17.446: 99.0402% ( 1) 00:15:17.867 17.446 - 17.541: 99.0550% ( 2) 00:15:17.867 17.541 - 17.636: 99.1067% ( 7) 00:15:17.867 17.636 - 17.730: 99.1657% ( 8) 00:15:17.867 17.730 - 17.825: 99.2100% ( 6) 00:15:17.867 17.825 - 17.920: 99.2617% ( 7) 00:15:17.867 17.920 - 18.015: 99.2839% ( 3) 00:15:17.867 18.015 - 18.110: 99.3651% ( 11) 00:15:17.867 18.110 - 18.204: 99.4537% ( 12) 00:15:17.867 18.204 - 18.299: 99.5866% ( 18) 00:15:17.867 18.299 - 18.394: 99.6235% ( 5) 00:15:17.867 18.394 - 18.489: 99.6825% ( 8) 00:15:17.867 18.489 - 18.584: 99.7121% ( 4) 00:15:17.867 18.584 - 18.679: 99.7342% ( 3) 00:15:17.867 18.679 - 18.773: 99.7638% ( 4) 00:15:17.867 18.773 - 18.868: 99.7933% ( 4) 00:15:17.867 18.868 - 18.963: 99.8080% ( 2) 00:15:17.867 18.963 - 19.058: 99.8154% ( 1) 00:15:17.867 19.153 - 19.247: 99.8228% ( 1) 00:15:17.867 19.247 - 19.342: 99.8376% ( 2) 00:15:17.867 19.532 - 19.627: 99.8450% ( 1) 00:15:17.867 19.721 - 19.816: 99.8523% ( 1) 00:15:17.867 20.006 - 20.101: 99.8597% ( 1) 00:15:17.867 21.997 - 22.092: 99.8671% ( 1) 00:15:17.867 23.704 - 23.799: 99.8745% ( 1) 
00:15:17.867 3980.705 - 4004.978: 99.9852% ( 15) 00:15:17.867 5995.330 - 6019.603: 99.9926% ( 1) 00:15:17.867 6990.507 - 7039.052: 100.0000% ( 1) 00:15:17.867 00:15:17.867 Complete histogram 00:15:17.867 ================== 00:15:17.867 Range in us Cumulative Count 00:15:17.867 2.050 - 2.062: 0.0664% ( 9) 00:15:17.867 2.062 - 2.074: 18.2650% ( 2465) 00:15:17.867 2.074 - 2.086: 39.4094% ( 2864) 00:15:17.867 2.086 - 2.098: 42.6652% ( 441) 00:15:17.867 2.098 - 2.110: 54.4481% ( 1596) 00:15:17.867 2.110 - 2.121: 61.6980% ( 982) 00:15:17.867 2.121 - 2.133: 63.9646% ( 307) 00:15:17.867 2.133 - 2.145: 74.1233% ( 1376) 00:15:17.867 2.145 - 2.157: 79.7933% ( 768) 00:15:17.867 2.157 - 2.169: 81.7054% ( 259) 00:15:17.867 2.169 - 2.181: 86.5855% ( 661) 00:15:17.867 2.181 - 2.193: 89.2285% ( 358) 00:15:17.867 2.193 - 2.204: 90.1735% ( 128) 00:15:17.867 2.204 - 2.216: 91.4064% ( 167) 00:15:17.867 2.216 - 2.228: 92.8461% ( 195) 00:15:17.867 2.228 - 2.240: 94.4481% ( 217) 00:15:17.868 2.240 - 2.252: 95.2381% ( 107) 00:15:17.868 2.252 - 2.264: 95.5113% ( 37) 00:15:17.868 2.264 - 2.276: 95.6294% ( 16) 00:15:17.868 2.276 - 2.287: 95.7475% ( 16) 00:15:17.868 2.287 - 2.299: 95.9468% ( 27) 00:15:17.868 2.299 - 2.311: 96.1905% ( 33) 00:15:17.868 2.311 - 2.323: 96.2865% ( 13) 00:15:17.868 2.323 - 2.335: 96.3455% ( 8) 00:15:17.868 2.335 - 2.347: 96.3898% ( 6) 00:15:17.868 2.347 - 2.359: 96.5375% ( 20) 00:15:17.868 2.359 - 2.370: 96.9731% ( 59) 00:15:17.868 2.370 - 2.382: 97.3348% ( 49) 00:15:17.868 2.382 - 2.394: 97.7999% ( 63) 00:15:17.868 2.394 - 2.406: 98.0509% ( 34) 00:15:17.868 2.406 - 2.418: 98.1912% ( 19) 00:15:17.868 2.418 - 2.430: 98.2798% ( 12) 00:15:17.868 2.430 - 2.441: 98.3463% ( 9) 00:15:17.868 2.441 - 2.453: 98.4053% ( 8) 00:15:17.868 2.453 - 2.465: 98.4422% ( 5) 00:15:17.868 2.465 - 2.477: 98.4718% ( 4) 00:15:17.868 2.477 - 2.489: 98.4865% ( 2) 00:15:17.868 2.489 - 2.501: 98.5013% ( 2) 00:15:17.868 2.501 - 2.513: 98.5087% ( 1) 00:15:17.868 2.513 - 2.524: 98.5308% ( 3) 00:15:17.868 2.524 - 2.536: 98.5382% ( 1) 00:15:17.868 2.536 - 2.548: 98.5677% ( 4) 00:15:17.868 2.560 - 2.572: 98.5825% ( 2) 00:15:17.868 2.631 - 2.643: 98.5899% ( 1) 00:15:17.868 2.643 - 2.655: 98.6047% ( 2) 00:15:17.868 2.679 - 2.690: 98.6120% ( 1) 00:15:17.868 2.714 - 2.726: 98.6194% ( 1) 00:15:17.868 2.797 - 2.809: 98.6268% ( 1) 00:15:17.868 3.271 - 3.295: 98.6342% ( 1) 00:15:17.868 3.295 - 3.319: 98.6416% ( 1) 00:15:17.868 3.342 - 3.366: 98.6489% ( 1) 00:15:17.868 3.366 - 3.390: 98.6563% ( 1) 00:15:17.868 3.390 - 3.413: 98.6637% ( 1) 00:15:17.868 3.413 - 3.437: 98.6785% ( 2) 00:15:17.868 3.461 - 3.484: 98.7006% ( 3) 00:15:17.868 3.484 - 3.508: 98.7154% ( 2) 00:15:17.868 3.508 - 3.532: 98.7228% ( 1) 00:15:17.868 3.532 - 3.556: 98.7449% ( 3) 00:15:17.868 3.556 - 3.579: 98.7597% ( 2) 00:15:17.868 3.579 - 3.603: 98.7745% ( 2) 00:15:17.868 3.627 - 3.650: 98.7818% ( 1) 00:15:17.868 3.650 - 3.674: 98.7966% ( 2) 00:15:17.868 3.674 - 3.698: 98.8114% ( 2) 00:15:17.868 3.721 - 3.745: 98.8409% ( 4) 00:15:17.868 3.745 - 3.769: 98.8630% ( 3) 00:15:17.868 4.077 - 4.101: 98.8704% ( 1) 00:15:17.868 4.954 - 4.978: 98.8926% ( 3) 00:15:17.868 5.191 - 5.215: 98.9000% ( 1) 00:15:17.868 5.997 - 6.021: 98.9073% ( 1) 00:15:17.868 6.021 - 6.044: 98.9295% ( 3) 00:15:17.868 6.068 - 6.116: 98.9443% ( 2) 00:15:17.868 6.116 - 6.163: 98.9664% ( 3) 00:15:17.868 6.163 - 6.210: 98.9738% ( 1) 00:15:17.868 6.210 - 6.258: 98.9812% ( 1) 00:15:17.868 6.258 - 6.305: 98.9886% ( 1) 00:15:17.868 6.305 - 6.353: 98.9959% ( 1) 00:15:17.868 6.542 - 6.590: 99.0033% ( 1) 
00:15:17.868 6.874 - 6.921: 99.0107% ( 1) 00:15:17.868 6.969 - 7.016: 99.0181% ( 1) 00:15:17.868 7.064 - 7.111: 99.0255% ( 1) 00:15:17.868 9.813 - 9.861: 99.0329% ( 1) 00:15:17.868 15.455 - 15.550: 99.0402% ( 1) 00:15:17.868 15.739 - 15.834: 99.0550% ( 2) 00:15:17.868 15.834 - 15.929: 99.0845% ( 4) 00:15:17.868 15.929 - 16.024: 99.1288% ( 6) 00:15:17.868 16.024 - 16.119: 99.1436% ( 2) 00:15:17.868 16.119 - 16.213: 99.1584% ( 2) 00:15:17.868 16.213 - 16.308: 99.1805% ( 3) 00:15:17.868 16.308 - 16.403: 99.2027% ( 3) 00:15:17.868 16.403 - 16.498: 9[2024-07-25 19:45:27.141652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.868 9.2174% ( 2) 00:15:17.868 16.498 - 16.593: 99.2543% ( 5) 00:15:17.868 16.593 - 16.687: 99.2839% ( 4) 00:15:17.868 16.687 - 16.782: 99.3134% ( 4) 00:15:17.868 16.782 - 16.877: 99.3429% ( 4) 00:15:17.868 16.877 - 16.972: 99.3872% ( 6) 00:15:17.868 16.972 - 17.067: 99.4094% ( 3) 00:15:17.868 17.067 - 17.161: 99.4241% ( 2) 00:15:17.868 17.636 - 17.730: 99.4315% ( 1) 00:15:17.868 17.730 - 17.825: 99.4389% ( 1) 00:15:17.868 18.110 - 18.204: 99.4463% ( 1) 00:15:17.868 18.299 - 18.394: 99.4611% ( 2) 00:15:17.868 18.679 - 18.773: 99.4684% ( 1) 00:15:17.868 1013.381 - 1019.449: 99.4758% ( 1) 00:15:17.868 3592.344 - 3616.616: 99.4832% ( 1) 00:15:17.868 3980.705 - 4004.978: 99.9483% ( 63) 00:15:17.868 4004.978 - 4029.250: 99.9926% ( 6) 00:15:17.868 6990.507 - 7039.052: 100.0000% ( 1) 00:15:17.868 00:15:17.868 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:17.868 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:17.868 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:17.868 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:17.868 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.126 [ 00:15:18.126 { 00:15:18.126 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.126 "subtype": "Discovery", 00:15:18.126 "listen_addresses": [], 00:15:18.126 "allow_any_host": true, 00:15:18.126 "hosts": [] 00:15:18.126 }, 00:15:18.126 { 00:15:18.126 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.126 "subtype": "NVMe", 00:15:18.126 "listen_addresses": [ 00:15:18.126 { 00:15:18.126 "trtype": "VFIOUSER", 00:15:18.126 "adrfam": "IPv4", 00:15:18.126 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.126 "trsvcid": "0" 00:15:18.126 } 00:15:18.126 ], 00:15:18.126 "allow_any_host": true, 00:15:18.126 "hosts": [], 00:15:18.126 "serial_number": "SPDK1", 00:15:18.126 "model_number": "SPDK bdev Controller", 00:15:18.126 "max_namespaces": 32, 00:15:18.126 "min_cntlid": 1, 00:15:18.126 "max_cntlid": 65519, 00:15:18.126 "namespaces": [ 00:15:18.126 { 00:15:18.126 "nsid": 1, 00:15:18.126 "bdev_name": "Malloc1", 00:15:18.126 "name": "Malloc1", 00:15:18.126 "nguid": "0A64A9325588468B890EC0FC0CB439AF", 00:15:18.126 "uuid": "0a64a932-5588-468b-890e-c0fc0cb439af" 00:15:18.126 } 00:15:18.126 ] 00:15:18.126 }, 00:15:18.126 { 00:15:18.126 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.126 "subtype": "NVMe", 00:15:18.126 "listen_addresses": [ 00:15:18.126 { 00:15:18.126 "trtype": "VFIOUSER", 00:15:18.126 "adrfam": "IPv4", 00:15:18.126 
"traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.126 "trsvcid": "0" 00:15:18.126 } 00:15:18.126 ], 00:15:18.126 "allow_any_host": true, 00:15:18.126 "hosts": [], 00:15:18.126 "serial_number": "SPDK2", 00:15:18.126 "model_number": "SPDK bdev Controller", 00:15:18.126 "max_namespaces": 32, 00:15:18.126 "min_cntlid": 1, 00:15:18.126 "max_cntlid": 65519, 00:15:18.126 "namespaces": [ 00:15:18.126 { 00:15:18.126 "nsid": 1, 00:15:18.126 "bdev_name": "Malloc2", 00:15:18.126 "name": "Malloc2", 00:15:18.126 "nguid": "9C0AE9A3659A4863A2834738AE5DC717", 00:15:18.126 "uuid": "9c0ae9a3-659a-4863-a283-4738ae5dc717" 00:15:18.126 } 00:15:18.126 ] 00:15:18.126 } 00:15:18.126 ] 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3940521 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:18.126 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:18.126 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.384 [2024-07-25 19:45:27.636556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.384 Malloc3 00:15:18.384 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:18.641 [2024-07-25 19:45:27.992974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.641 19:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.641 Asynchronous Event Request test 00:15:18.641 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.641 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.641 Registering asynchronous event callbacks... 00:15:18.641 Starting namespace attribute notice tests for all controllers... 00:15:18.641 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:18.641 aer_cb - Changed Namespace 00:15:18.641 Cleaning up... 
00:15:18.899 [ 00:15:18.899 { 00:15:18.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.899 "subtype": "Discovery", 00:15:18.899 "listen_addresses": [], 00:15:18.899 "allow_any_host": true, 00:15:18.899 "hosts": [] 00:15:18.899 }, 00:15:18.899 { 00:15:18.899 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.899 "subtype": "NVMe", 00:15:18.899 "listen_addresses": [ 00:15:18.899 { 00:15:18.899 "trtype": "VFIOUSER", 00:15:18.899 "adrfam": "IPv4", 00:15:18.899 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.899 "trsvcid": "0" 00:15:18.899 } 00:15:18.899 ], 00:15:18.899 "allow_any_host": true, 00:15:18.899 "hosts": [], 00:15:18.899 "serial_number": "SPDK1", 00:15:18.899 "model_number": "SPDK bdev Controller", 00:15:18.899 "max_namespaces": 32, 00:15:18.899 "min_cntlid": 1, 00:15:18.899 "max_cntlid": 65519, 00:15:18.899 "namespaces": [ 00:15:18.899 { 00:15:18.899 "nsid": 1, 00:15:18.899 "bdev_name": "Malloc1", 00:15:18.899 "name": "Malloc1", 00:15:18.899 "nguid": "0A64A9325588468B890EC0FC0CB439AF", 00:15:18.899 "uuid": "0a64a932-5588-468b-890e-c0fc0cb439af" 00:15:18.899 }, 00:15:18.899 { 00:15:18.899 "nsid": 2, 00:15:18.899 "bdev_name": "Malloc3", 00:15:18.899 "name": "Malloc3", 00:15:18.899 "nguid": "1277611322E9480EA2C0DAB7AE792DFF", 00:15:18.899 "uuid": "12776113-22e9-480e-a2c0-dab7ae792dff" 00:15:18.899 } 00:15:18.899 ] 00:15:18.899 }, 00:15:18.899 { 00:15:18.899 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.899 "subtype": "NVMe", 00:15:18.899 "listen_addresses": [ 00:15:18.899 { 00:15:18.899 "trtype": "VFIOUSER", 00:15:18.899 "adrfam": "IPv4", 00:15:18.899 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.899 "trsvcid": "0" 00:15:18.899 } 00:15:18.899 ], 00:15:18.899 "allow_any_host": true, 00:15:18.899 "hosts": [], 00:15:18.899 "serial_number": "SPDK2", 00:15:18.899 "model_number": "SPDK bdev Controller", 00:15:18.899 "max_namespaces": 32, 00:15:18.899 "min_cntlid": 1, 00:15:18.899 "max_cntlid": 65519, 00:15:18.899 "namespaces": [ 00:15:18.899 { 00:15:18.899 "nsid": 1, 00:15:18.899 "bdev_name": "Malloc2", 00:15:18.899 "name": "Malloc2", 00:15:18.899 "nguid": "9C0AE9A3659A4863A2834738AE5DC717", 00:15:18.899 "uuid": "9c0ae9a3-659a-4863-a283-4738ae5dc717" 00:15:18.899 } 00:15:18.899 ] 00:15:18.899 } 00:15:18.899 ] 00:15:18.899 19:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3940521 00:15:18.899 19:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.899 19:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:18.899 19:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:18.899 19:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:18.899 [2024-07-25 19:45:28.268025] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:15:18.899 [2024-07-25 19:45:28.268086] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940658 ] 00:15:18.899 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.899 [2024-07-25 19:45:28.302084] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:18.899 [2024-07-25 19:45:28.311385] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.899 [2024-07-25 19:45:28.311414] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fad4605f000 00:15:18.899 [2024-07-25 19:45:28.312387] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.313386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.314397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.315396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.316404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.317425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.318414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.319421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.899 [2024-07-25 19:45:28.320430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.899 [2024-07-25 19:45:28.320452] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fad44e11000 00:15:18.899 [2024-07-25 19:45:28.321838] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:19.158 [2024-07-25 19:45:28.338623] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:19.158 [2024-07-25 19:45:28.338656] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:19.158 [2024-07-25 19:45:28.343782] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:19.158 [2024-07-25 19:45:28.343831] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:19.158 [2024-07-25 19:45:28.343915] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:19.158 [2024-07-25 19:45:28.343936] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:19.158 [2024-07-25 19:45:28.343946] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:19.158 [2024-07-25 19:45:28.344789] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:19.158 [2024-07-25 19:45:28.344813] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:19.158 [2024-07-25 19:45:28.344826] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:19.158 [2024-07-25 19:45:28.345795] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:19.158 [2024-07-25 19:45:28.345814] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:19.158 [2024-07-25 19:45:28.345827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:19.158 [2024-07-25 19:45:28.346801] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:19.158 [2024-07-25 19:45:28.346821] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:19.158 [2024-07-25 19:45:28.347803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:19.158 [2024-07-25 19:45:28.347822] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:19.158 [2024-07-25 19:45:28.347831] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:19.158 [2024-07-25 19:45:28.347842] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:19.158 [2024-07-25 19:45:28.347951] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:19.158 [2024-07-25 19:45:28.347958] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:19.158 [2024-07-25 19:45:28.347966] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:19.158 [2024-07-25 19:45:28.348814] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:19.158 [2024-07-25 19:45:28.349821] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:19.158 [2024-07-25 19:45:28.350824] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:19.158 [2024-07-25 19:45:28.351815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.158 [2024-07-25 19:45:28.351898] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:19.158 [2024-07-25 19:45:28.352838] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:19.158 [2024-07-25 19:45:28.352857] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:19.158 [2024-07-25 19:45:28.352866] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:19.158 [2024-07-25 19:45:28.352889] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:19.158 [2024-07-25 19:45:28.352901] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:19.158 [2024-07-25 19:45:28.352922] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:19.158 [2024-07-25 19:45:28.352931] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.158 [2024-07-25 19:45:28.352948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.158 [2024-07-25 19:45:28.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:19.158 [2024-07-25 19:45:28.361101] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:19.158 [2024-07-25 19:45:28.361112] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:19.158 [2024-07-25 19:45:28.361120] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:19.158 [2024-07-25 19:45:28.361127] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:19.158 [2024-07-25 19:45:28.361135] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:19.158 [2024-07-25 19:45:28.361143] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:19.158 [2024-07-25 19:45:28.361151] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:19.158 [2024-07-25 19:45:28.361163] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.361179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.369070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.369093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.159 [2024-07-25 19:45:28.369107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.159 [2024-07-25 19:45:28.369119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.159 [2024-07-25 19:45:28.369131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.159 [2024-07-25 19:45:28.369139] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.369158] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.369173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.377069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.377086] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:19.159 [2024-07-25 19:45:28.377096] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.377107] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.377121] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.377136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.385072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.385145] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.385162] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.385174] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:19.159 [2024-07-25 19:45:28.385183] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:19.159 [2024-07-25 19:45:28.385193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:19.159 
[2024-07-25 19:45:28.393084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.393105] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:19.159 [2024-07-25 19:45:28.393125] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.393139] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.393151] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:19.159 [2024-07-25 19:45:28.393159] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.159 [2024-07-25 19:45:28.393169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.401071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.401098] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.401112] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.401125] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:19.159 [2024-07-25 19:45:28.401137] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.159 [2024-07-25 19:45:28.401148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.409071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.409092] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.409103] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.409117] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.409127] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.409135] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.409144] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:19.159 [2024-07-25 19:45:28.409152] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:19.159 [2024-07-25 19:45:28.409160] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:19.159 [2024-07-25 19:45:28.409189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.417071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.417097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.425071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.425095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.433071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.433095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.441066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.441093] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:19.159 [2024-07-25 19:45:28.441102] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:19.159 [2024-07-25 19:45:28.441109] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:19.159 [2024-07-25 19:45:28.441115] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:19.159 [2024-07-25 19:45:28.441125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:19.159 [2024-07-25 19:45:28.441136] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:19.159 [2024-07-25 19:45:28.441144] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:19.159 [2024-07-25 19:45:28.441157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.441168] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:19.159 [2024-07-25 19:45:28.441176] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.159 [2024-07-25 19:45:28.441185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.441197] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:19.159 [2024-07-25 19:45:28.441205] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:19.159 [2024-07-25 19:45:28.441213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:19.159 [2024-07-25 19:45:28.449069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.449097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.449112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:19.159 [2024-07-25 19:45:28.449126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:19.159 ===================================================== 00:15:19.159 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.159 ===================================================== 00:15:19.159 Controller Capabilities/Features 00:15:19.159 ================================ 00:15:19.159 Vendor ID: 4e58 00:15:19.159 Subsystem Vendor ID: 4e58 00:15:19.159 Serial Number: SPDK2 00:15:19.159 Model Number: SPDK bdev Controller 00:15:19.159 Firmware Version: 24.05.1 00:15:19.159 Recommended Arb Burst: 6 00:15:19.159 IEEE OUI Identifier: 8d 6b 50 00:15:19.159 Multi-path I/O 00:15:19.159 May have multiple subsystem ports: Yes 00:15:19.159 May have multiple controllers: Yes 00:15:19.159 Associated with SR-IOV VF: No 00:15:19.159 Max Data Transfer Size: 131072 00:15:19.159 Max Number of Namespaces: 32 00:15:19.159 Max Number of I/O Queues: 127 00:15:19.159 NVMe Specification Version (VS): 1.3 00:15:19.159 NVMe Specification Version (Identify): 1.3 00:15:19.159 Maximum Queue Entries: 256 00:15:19.159 Contiguous Queues Required: Yes 00:15:19.159 Arbitration Mechanisms Supported 00:15:19.159 Weighted Round Robin: Not Supported 00:15:19.159 Vendor Specific: Not Supported 00:15:19.159 Reset Timeout: 15000 ms 00:15:19.160 Doorbell Stride: 4 bytes 00:15:19.160 NVM Subsystem Reset: Not Supported 00:15:19.160 Command Sets Supported 00:15:19.160 NVM Command Set: Supported 00:15:19.160 Boot Partition: Not Supported 00:15:19.160 Memory Page Size Minimum: 4096 bytes 00:15:19.160 Memory Page Size Maximum: 4096 bytes 00:15:19.160 Persistent Memory Region: Not Supported 00:15:19.160 Optional Asynchronous Events Supported 00:15:19.160 Namespace Attribute Notices: Supported 00:15:19.160 Firmware Activation Notices: Not Supported 00:15:19.160 ANA Change Notices: Not Supported 00:15:19.160 PLE Aggregate Log Change Notices: Not Supported 00:15:19.160 LBA Status Info Alert Notices: Not Supported 00:15:19.160 EGE Aggregate Log Change Notices: Not Supported 00:15:19.160 Normal NVM Subsystem Shutdown event: Not Supported 00:15:19.160 Zone Descriptor Change Notices: Not Supported 00:15:19.160 Discovery Log Change Notices: Not Supported 00:15:19.160 Controller Attributes 00:15:19.160 128-bit Host Identifier: Supported 00:15:19.160 Non-Operational Permissive Mode: Not Supported 00:15:19.160 NVM Sets: Not Supported 00:15:19.160 Read Recovery Levels: Not Supported 00:15:19.160 Endurance Groups: Not Supported 00:15:19.160 Predictable Latency Mode: Not Supported 00:15:19.160 Traffic Based Keep ALive: Not Supported 00:15:19.160 Namespace Granularity: Not 
Supported 00:15:19.160 SQ Associations: Not Supported 00:15:19.160 UUID List: Not Supported 00:15:19.160 Multi-Domain Subsystem: Not Supported 00:15:19.160 Fixed Capacity Management: Not Supported 00:15:19.160 Variable Capacity Management: Not Supported 00:15:19.160 Delete Endurance Group: Not Supported 00:15:19.160 Delete NVM Set: Not Supported 00:15:19.160 Extended LBA Formats Supported: Not Supported 00:15:19.160 Flexible Data Placement Supported: Not Supported 00:15:19.160 00:15:19.160 Controller Memory Buffer Support 00:15:19.160 ================================ 00:15:19.160 Supported: No 00:15:19.160 00:15:19.160 Persistent Memory Region Support 00:15:19.160 ================================ 00:15:19.160 Supported: No 00:15:19.160 00:15:19.160 Admin Command Set Attributes 00:15:19.160 ============================ 00:15:19.160 Security Send/Receive: Not Supported 00:15:19.160 Format NVM: Not Supported 00:15:19.160 Firmware Activate/Download: Not Supported 00:15:19.160 Namespace Management: Not Supported 00:15:19.160 Device Self-Test: Not Supported 00:15:19.160 Directives: Not Supported 00:15:19.160 NVMe-MI: Not Supported 00:15:19.160 Virtualization Management: Not Supported 00:15:19.160 Doorbell Buffer Config: Not Supported 00:15:19.160 Get LBA Status Capability: Not Supported 00:15:19.160 Command & Feature Lockdown Capability: Not Supported 00:15:19.160 Abort Command Limit: 4 00:15:19.160 Async Event Request Limit: 4 00:15:19.160 Number of Firmware Slots: N/A 00:15:19.160 Firmware Slot 1 Read-Only: N/A 00:15:19.160 Firmware Activation Without Reset: N/A 00:15:19.160 Multiple Update Detection Support: N/A 00:15:19.160 Firmware Update Granularity: No Information Provided 00:15:19.160 Per-Namespace SMART Log: No 00:15:19.160 Asymmetric Namespace Access Log Page: Not Supported 00:15:19.160 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:19.160 Command Effects Log Page: Supported 00:15:19.160 Get Log Page Extended Data: Supported 00:15:19.160 Telemetry Log Pages: Not Supported 00:15:19.160 Persistent Event Log Pages: Not Supported 00:15:19.160 Supported Log Pages Log Page: May Support 00:15:19.160 Commands Supported & Effects Log Page: Not Supported 00:15:19.160 Feature Identifiers & Effects Log Page:May Support 00:15:19.160 NVMe-MI Commands & Effects Log Page: May Support 00:15:19.160 Data Area 4 for Telemetry Log: Not Supported 00:15:19.160 Error Log Page Entries Supported: 128 00:15:19.160 Keep Alive: Supported 00:15:19.160 Keep Alive Granularity: 10000 ms 00:15:19.160 00:15:19.160 NVM Command Set Attributes 00:15:19.160 ========================== 00:15:19.160 Submission Queue Entry Size 00:15:19.160 Max: 64 00:15:19.160 Min: 64 00:15:19.160 Completion Queue Entry Size 00:15:19.160 Max: 16 00:15:19.160 Min: 16 00:15:19.160 Number of Namespaces: 32 00:15:19.160 Compare Command: Supported 00:15:19.160 Write Uncorrectable Command: Not Supported 00:15:19.160 Dataset Management Command: Supported 00:15:19.160 Write Zeroes Command: Supported 00:15:19.160 Set Features Save Field: Not Supported 00:15:19.160 Reservations: Not Supported 00:15:19.160 Timestamp: Not Supported 00:15:19.160 Copy: Supported 00:15:19.160 Volatile Write Cache: Present 00:15:19.160 Atomic Write Unit (Normal): 1 00:15:19.160 Atomic Write Unit (PFail): 1 00:15:19.160 Atomic Compare & Write Unit: 1 00:15:19.160 Fused Compare & Write: Supported 00:15:19.160 Scatter-Gather List 00:15:19.160 SGL Command Set: Supported (Dword aligned) 00:15:19.160 SGL Keyed: Not Supported 00:15:19.160 SGL Bit Bucket Descriptor: Not Supported 
00:15:19.160 SGL Metadata Pointer: Not Supported 00:15:19.160 Oversized SGL: Not Supported 00:15:19.160 SGL Metadata Address: Not Supported 00:15:19.160 SGL Offset: Not Supported 00:15:19.160 Transport SGL Data Block: Not Supported 00:15:19.160 Replay Protected Memory Block: Not Supported 00:15:19.160 00:15:19.160 Firmware Slot Information 00:15:19.160 ========================= 00:15:19.160 Active slot: 1 00:15:19.160 Slot 1 Firmware Revision: 24.05.1 00:15:19.160 00:15:19.160 00:15:19.160 Commands Supported and Effects 00:15:19.160 ============================== 00:15:19.160 Admin Commands 00:15:19.160 -------------- 00:15:19.160 Get Log Page (02h): Supported 00:15:19.160 Identify (06h): Supported 00:15:19.160 Abort (08h): Supported 00:15:19.160 Set Features (09h): Supported 00:15:19.160 Get Features (0Ah): Supported 00:15:19.160 Asynchronous Event Request (0Ch): Supported 00:15:19.160 Keep Alive (18h): Supported 00:15:19.160 I/O Commands 00:15:19.160 ------------ 00:15:19.160 Flush (00h): Supported LBA-Change 00:15:19.160 Write (01h): Supported LBA-Change 00:15:19.160 Read (02h): Supported 00:15:19.160 Compare (05h): Supported 00:15:19.160 Write Zeroes (08h): Supported LBA-Change 00:15:19.160 Dataset Management (09h): Supported LBA-Change 00:15:19.160 Copy (19h): Supported LBA-Change 00:15:19.160 Unknown (79h): Supported LBA-Change 00:15:19.160 Unknown (7Ah): Supported 00:15:19.160 00:15:19.160 Error Log 00:15:19.160 ========= 00:15:19.160 00:15:19.160 Arbitration 00:15:19.160 =========== 00:15:19.160 Arbitration Burst: 1 00:15:19.160 00:15:19.160 Power Management 00:15:19.160 ================ 00:15:19.160 Number of Power States: 1 00:15:19.160 Current Power State: Power State #0 00:15:19.160 Power State #0: 00:15:19.160 Max Power: 0.00 W 00:15:19.160 Non-Operational State: Operational 00:15:19.160 Entry Latency: Not Reported 00:15:19.160 Exit Latency: Not Reported 00:15:19.160 Relative Read Throughput: 0 00:15:19.160 Relative Read Latency: 0 00:15:19.160 Relative Write Throughput: 0 00:15:19.160 Relative Write Latency: 0 00:15:19.160 Idle Power: Not Reported 00:15:19.160 Active Power: Not Reported 00:15:19.160 Non-Operational Permissive Mode: Not Supported 00:15:19.160 00:15:19.160 Health Information 00:15:19.160 ================== 00:15:19.160 Critical Warnings: 00:15:19.160 Available Spare Space: OK 00:15:19.160 Temperature: OK 00:15:19.160 Device Reliability: OK 00:15:19.160 Read Only: No 00:15:19.160 Volatile Memory Backup: OK 00:15:19.160 Current Temperature: 0 Kelvin[2024-07-25 19:45:28.449243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:19.160 [2024-07-25 19:45:28.457068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:19.160 [2024-07-25 19:45:28.457111] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:19.160 [2024-07-25 19:45:28.457128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.160 [2024-07-25 19:45:28.457139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.160 [2024-07-25 19:45:28.457149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.160 [2024-07-25 
19:45:28.457158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.160 [2024-07-25 19:45:28.457219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:19.160 [2024-07-25 19:45:28.457239] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:19.160 [2024-07-25 19:45:28.458218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.161 [2024-07-25 19:45:28.458303] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:19.161 [2024-07-25 19:45:28.458318] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:19.161 [2024-07-25 19:45:28.459232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:19.161 [2024-07-25 19:45:28.459256] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:19.161 [2024-07-25 19:45:28.459306] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:19.161 [2024-07-25 19:45:28.460491] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:19.161 (-273 Celsius) 00:15:19.161 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:19.161 Available Spare: 0% 00:15:19.161 Available Spare Threshold: 0% 00:15:19.161 Life Percentage Used: 0% 00:15:19.161 Data Units Read: 0 00:15:19.161 Data Units Written: 0 00:15:19.161 Host Read Commands: 0 00:15:19.161 Host Write Commands: 0 00:15:19.161 Controller Busy Time: 0 minutes 00:15:19.161 Power Cycles: 0 00:15:19.161 Power On Hours: 0 hours 00:15:19.161 Unsafe Shutdowns: 0 00:15:19.161 Unrecoverable Media Errors: 0 00:15:19.161 Lifetime Error Log Entries: 0 00:15:19.161 Warning Temperature Time: 0 minutes 00:15:19.161 Critical Temperature Time: 0 minutes 00:15:19.161 00:15:19.161 Number of Queues 00:15:19.161 ================ 00:15:19.161 Number of I/O Submission Queues: 127 00:15:19.161 Number of I/O Completion Queues: 127 00:15:19.161 00:15:19.161 Active Namespaces 00:15:19.161 ================= 00:15:19.161 Namespace ID:1 00:15:19.161 Error Recovery Timeout: Unlimited 00:15:19.161 Command Set Identifier: NVM (00h) 00:15:19.161 Deallocate: Supported 00:15:19.161 Deallocated/Unwritten Error: Not Supported 00:15:19.161 Deallocated Read Value: Unknown 00:15:19.161 Deallocate in Write Zeroes: Not Supported 00:15:19.161 Deallocated Guard Field: 0xFFFF 00:15:19.161 Flush: Supported 00:15:19.161 Reservation: Supported 00:15:19.161 Namespace Sharing Capabilities: Multiple Controllers 00:15:19.161 Size (in LBAs): 131072 (0GiB) 00:15:19.161 Capacity (in LBAs): 131072 (0GiB) 00:15:19.161 Utilization (in LBAs): 131072 (0GiB) 00:15:19.161 NGUID: 9C0AE9A3659A4863A2834738AE5DC717 00:15:19.161 UUID: 9c0ae9a3-659a-4863-a283-4738ae5dc717 00:15:19.161 Thin Provisioning: Not Supported 00:15:19.161 Per-NS Atomic Units: Yes 00:15:19.161 Atomic Boundary Size (Normal): 0 00:15:19.161 Atomic Boundary Size (PFail): 0 00:15:19.161 Atomic Boundary Offset: 0 00:15:19.161 Maximum Single Source Range 
Length: 65535 00:15:19.161 Maximum Copy Length: 65535 00:15:19.161 Maximum Source Range Count: 1 00:15:19.161 NGUID/EUI64 Never Reused: No 00:15:19.161 Namespace Write Protected: No 00:15:19.161 Number of LBA Formats: 1 00:15:19.161 Current LBA Format: LBA Format #00 00:15:19.161 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:19.161 00:15:19.161 19:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:19.161 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.418 [2024-07-25 19:45:28.684773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.678 Initializing NVMe Controllers 00:15:24.679 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.679 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.679 Initialization complete. Launching workers. 00:15:24.679 ======================================================== 00:15:24.679 Latency(us) 00:15:24.679 Device Information : IOPS MiB/s Average min max 00:15:24.679 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35818.79 139.92 3572.78 1184.29 10503.11 00:15:24.679 ======================================================== 00:15:24.679 Total : 35818.79 139.92 3572.78 1184.29 10503.11 00:15:24.679 00:15:24.679 [2024-07-25 19:45:33.791409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.679 19:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:24.679 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.679 [2024-07-25 19:45:34.023107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.942 Initializing NVMe Controllers 00:15:29.942 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.942 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.942 Initialization complete. Launching workers. 
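A note on the raw register traffic in the controller bring-up traced above: the offsets in the nvme_vfio_ctrlr_get_reg_*/set_reg_* lines are the standard NVMe controller registers, so 0x0 is CAP, 0x8 is VS (0x10300 decodes to NVMe 1.3, matching the dump), 0x14 is CC, 0x1c is CSTS, 0x24 is AQA, 0x28 is ASQ and 0x30 is ACQ. The AQA write of 0xff00ff programs 256-entry admin queues (the values are zero-based), consistent with the Maximum Queue Entries figure above; the CC write of 0x460001 sets EN=1 with IOSQES=6 and IOCQES=4, that is 64-byte submission and 16-byte completion queue entries, again matching the dump; and during teardown the CC write of 0x464001 adds SHN=01b to request a normal shutdown, after which the CSTS read of 0x9 shows RDY=1 and SHST=10b, shutdown processing complete.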
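For anyone who wants to reproduce the attach outside the test scripts, the sketch below shows roughly how an SPDK application could connect to the same vfio-user controller that spdk_nvme_perf and the identify run above address through their -r transport string, and print a few of the identify fields from the dump. This is a minimal sketch, not part of the test suite: it assumes SPDK's public headers and a still-running target at that socket path, the application name is arbitrary, error handling is reduced to bare checks, and it has not been built against this particular tree.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "vfio_user_probe";        /* arbitrary application name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport string the tools above receive via -r. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 "
        "subnqn:nqn.2019-07.io.spdk:cnode2") != 0) {
        return 1;
    }

    ctrlr = spdk_nvme_connect(&trid, NULL, 0);   /* default controller options */
    if (ctrlr == NULL) {
        return 1;
    }

    /* A few of the fields printed in the controller dump above. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("SN: %.20s  MN: %.40s  FR: %.8s  namespaces: %u\n",
           (const char *)cdata->sn, (const char *)cdata->mn,
           (const char *)cdata->fr, cdata->nn);

    spdk_nvme_detach(ctrlr);
    return 0;
}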
00:15:29.942 ======================================================== 00:15:29.942 Latency(us) 00:15:29.942 Device Information : IOPS MiB/s Average min max 00:15:29.942 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33925.99 132.52 3772.23 1201.18 7723.35 00:15:29.942 ======================================================== 00:15:29.942 Total : 33925.99 132.52 3772.23 1201.18 7723.35 00:15:29.942 00:15:29.942 [2024-07-25 19:45:39.044793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.942 19:45:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:29.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.942 [2024-07-25 19:45:39.257709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.249 [2024-07-25 19:45:44.386210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.249 Initializing NVMe Controllers 00:15:35.249 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:35.249 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:35.249 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:35.249 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:35.249 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:35.249 Initialization complete. Launching workers. 00:15:35.249 Starting thread on core 2 00:15:35.249 Starting thread on core 3 00:15:35.249 Starting thread on core 1 00:15:35.249 19:45:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:35.249 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.249 [2024-07-25 19:45:44.676706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.529 [2024-07-25 19:45:47.746508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.529 Initializing NVMe Controllers 00:15:38.529 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.529 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:38.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:38.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:38.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:38.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:38.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:38.529 Initialization complete. Launching workers. 
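As a quick plausibility check on the two spdk_nvme_perf runs above, using nothing but the printed numbers: with 128 I/Os kept in flight (-q 128), Little's law puts the expected average latency near 128 divided by the achieved IOPS. For the read run that gives 128 / 35818.79 ≈ 3573 us against a reported average of 3572.78 us, and for the write run 128 / 33925.99 ≈ 3773 us against 3772.23 us, so the reported averages and throughput are self-consistent.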
00:15:38.529 Starting thread on core 1 with urgent priority queue 00:15:38.529 Starting thread on core 2 with urgent priority queue 00:15:38.529 Starting thread on core 3 with urgent priority queue 00:15:38.529 Starting thread on core 0 with urgent priority queue 00:15:38.529 SPDK bdev Controller (SPDK2 ) core 0: 3920.33 IO/s 25.51 secs/100000 ios 00:15:38.529 SPDK bdev Controller (SPDK2 ) core 1: 4264.00 IO/s 23.45 secs/100000 ios 00:15:38.529 SPDK bdev Controller (SPDK2 ) core 2: 4894.00 IO/s 20.43 secs/100000 ios 00:15:38.529 SPDK bdev Controller (SPDK2 ) core 3: 4609.67 IO/s 21.69 secs/100000 ios 00:15:38.529 ======================================================== 00:15:38.529 00:15:38.529 19:45:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.529 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.786 [2024-07-25 19:45:48.047552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.786 Initializing NVMe Controllers 00:15:38.786 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.786 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.786 Namespace ID: 1 size: 0GB 00:15:38.786 Initialization complete. 00:15:38.786 INFO: using host memory buffer for IO 00:15:38.786 Hello world! 00:15:38.786 [2024-07-25 19:45:48.059620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.786 19:45:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.786 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.043 [2024-07-25 19:45:48.352064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.414 Initializing NVMe Controllers 00:15:40.414 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.414 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.414 Initialization complete. Launching workers. 
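The arbitration summary above is easier to read once you notice that its second column is simply the time implied by the first for the per-core budget of 100000 I/Os (-n 100000 in the echoed configuration): 100000 / 3920.33 IO/s ≈ 25.51 s for core 0, and likewise 23.45 s, 20.43 s and 21.69 s for cores 1, 2 and 3, which is exactly what is printed as secs/100000 ios.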
00:15:40.414 submit (in ns) avg, min, max = 7815.5, 3523.3, 4015037.8 00:15:40.414 complete (in ns) avg, min, max = 26709.3, 2060.0, 4996824.4 00:15:40.414 00:15:40.414 Submit histogram 00:15:40.414 ================ 00:15:40.414 Range in us Cumulative Count 00:15:40.414 3.508 - 3.532: 0.0745% ( 10) 00:15:40.414 3.532 - 3.556: 0.8717% ( 107) 00:15:40.414 3.556 - 3.579: 3.0249% ( 289) 00:15:40.414 3.579 - 3.603: 6.8619% ( 515) 00:15:40.414 3.603 - 3.627: 14.0963% ( 971) 00:15:40.414 3.627 - 3.650: 23.7595% ( 1297) 00:15:40.414 3.650 - 3.674: 32.7745% ( 1210) 00:15:40.414 3.674 - 3.698: 41.0371% ( 1109) 00:15:40.414 3.698 - 3.721: 48.8303% ( 1046) 00:15:40.414 3.721 - 3.745: 56.2807% ( 1000) 00:15:40.414 3.745 - 3.769: 61.0267% ( 637) 00:15:40.414 3.769 - 3.793: 65.3852% ( 585) 00:15:40.414 3.793 - 3.816: 68.5889% ( 430) 00:15:40.414 3.816 - 3.840: 71.9043% ( 445) 00:15:40.414 3.840 - 3.864: 75.3464% ( 462) 00:15:40.414 3.864 - 3.887: 79.1462% ( 510) 00:15:40.414 3.887 - 3.911: 82.4244% ( 440) 00:15:40.414 3.911 - 3.935: 85.4493% ( 406) 00:15:40.414 3.935 - 3.959: 87.6471% ( 295) 00:15:40.414 3.959 - 3.982: 89.5098% ( 250) 00:15:40.414 3.982 - 4.006: 91.0893% ( 212) 00:15:40.414 4.006 - 4.030: 92.3558% ( 170) 00:15:40.414 4.030 - 4.053: 93.4361% ( 145) 00:15:40.414 4.053 - 4.077: 94.4122% ( 131) 00:15:40.414 4.077 - 4.101: 95.2094% ( 107) 00:15:40.414 4.101 - 4.124: 95.7011% ( 66) 00:15:40.414 4.124 - 4.148: 96.1034% ( 54) 00:15:40.414 4.148 - 4.172: 96.3940% ( 39) 00:15:40.414 4.172 - 4.196: 96.5579% ( 22) 00:15:40.414 4.196 - 4.219: 96.6920% ( 18) 00:15:40.414 4.219 - 4.243: 96.8187% ( 17) 00:15:40.414 4.243 - 4.267: 96.9453% ( 17) 00:15:40.414 4.267 - 4.290: 97.0347% ( 12) 00:15:40.414 4.290 - 4.314: 97.1092% ( 10) 00:15:40.414 4.314 - 4.338: 97.1912% ( 11) 00:15:40.414 4.338 - 4.361: 97.2210% ( 4) 00:15:40.414 4.361 - 4.385: 97.2806% ( 8) 00:15:40.414 4.385 - 4.409: 97.3327% ( 7) 00:15:40.414 4.409 - 4.433: 97.3551% ( 3) 00:15:40.414 4.433 - 4.456: 97.3774% ( 3) 00:15:40.414 4.456 - 4.480: 97.3998% ( 3) 00:15:40.414 4.480 - 4.504: 97.4147% ( 2) 00:15:40.414 4.504 - 4.527: 97.4221% ( 1) 00:15:40.414 4.527 - 4.551: 97.4296% ( 1) 00:15:40.414 4.599 - 4.622: 97.4370% ( 1) 00:15:40.414 4.622 - 4.646: 97.4445% ( 1) 00:15:40.414 4.693 - 4.717: 97.4519% ( 1) 00:15:40.414 4.741 - 4.764: 97.4743% ( 3) 00:15:40.414 4.764 - 4.788: 97.4817% ( 1) 00:15:40.414 4.788 - 4.812: 97.5115% ( 4) 00:15:40.414 4.812 - 4.836: 97.5563% ( 6) 00:15:40.414 4.836 - 4.859: 97.6233% ( 9) 00:15:40.414 4.859 - 4.883: 97.6457% ( 3) 00:15:40.414 4.883 - 4.907: 97.6829% ( 5) 00:15:40.414 4.907 - 4.930: 97.7351% ( 7) 00:15:40.414 4.930 - 4.954: 97.7723% ( 5) 00:15:40.414 4.954 - 4.978: 97.8096% ( 5) 00:15:40.414 4.978 - 5.001: 97.8543% ( 6) 00:15:40.414 5.001 - 5.025: 97.8990% ( 6) 00:15:40.414 5.025 - 5.049: 97.9139% ( 2) 00:15:40.414 5.049 - 5.073: 97.9288% ( 2) 00:15:40.414 5.073 - 5.096: 97.9660% ( 5) 00:15:40.414 5.096 - 5.120: 97.9884% ( 3) 00:15:40.414 5.120 - 5.144: 98.0033% ( 2) 00:15:40.414 5.144 - 5.167: 98.0331% ( 4) 00:15:40.414 5.167 - 5.191: 98.0778% ( 6) 00:15:40.414 5.191 - 5.215: 98.1225% ( 6) 00:15:40.414 5.215 - 5.239: 98.1448% ( 3) 00:15:40.414 5.239 - 5.262: 98.1746% ( 4) 00:15:40.414 5.262 - 5.286: 98.1821% ( 1) 00:15:40.414 5.286 - 5.310: 98.1895% ( 1) 00:15:40.414 5.333 - 5.357: 98.1970% ( 1) 00:15:40.414 5.381 - 5.404: 98.2268% ( 4) 00:15:40.414 5.452 - 5.476: 98.2342% ( 1) 00:15:40.414 5.523 - 5.547: 98.2491% ( 2) 00:15:40.414 5.547 - 5.570: 98.2566% ( 1) 00:15:40.414 5.570 - 5.594: 98.2789% ( 3) 
00:15:40.414 5.594 - 5.618: 98.2864% ( 1) 00:15:40.414 5.618 - 5.641: 98.2938% ( 1) 00:15:40.414 5.760 - 5.784: 98.3087% ( 2) 00:15:40.414 5.784 - 5.807: 98.3162% ( 1) 00:15:40.414 5.902 - 5.926: 98.3236% ( 1) 00:15:40.414 5.973 - 5.997: 98.3311% ( 1) 00:15:40.414 6.044 - 6.068: 98.3385% ( 1) 00:15:40.415 6.163 - 6.210: 98.3460% ( 1) 00:15:40.415 6.305 - 6.353: 98.3534% ( 1) 00:15:40.415 6.400 - 6.447: 98.3609% ( 1) 00:15:40.415 6.495 - 6.542: 98.3684% ( 1) 00:15:40.415 6.637 - 6.684: 98.3758% ( 1) 00:15:40.415 6.732 - 6.779: 98.3833% ( 1) 00:15:40.415 6.827 - 6.874: 98.3907% ( 1) 00:15:40.415 6.874 - 6.921: 98.4056% ( 2) 00:15:40.415 6.921 - 6.969: 98.4131% ( 1) 00:15:40.415 6.969 - 7.016: 98.4205% ( 1) 00:15:40.415 7.016 - 7.064: 98.4354% ( 2) 00:15:40.415 7.064 - 7.111: 98.4503% ( 2) 00:15:40.415 7.111 - 7.159: 98.4727% ( 3) 00:15:40.415 7.159 - 7.206: 98.4876% ( 2) 00:15:40.415 7.206 - 7.253: 98.5099% ( 3) 00:15:40.415 7.301 - 7.348: 98.5174% ( 1) 00:15:40.415 7.348 - 7.396: 98.5248% ( 1) 00:15:40.415 7.490 - 7.538: 98.5472% ( 3) 00:15:40.415 7.538 - 7.585: 98.5546% ( 1) 00:15:40.415 7.727 - 7.775: 98.5621% ( 1) 00:15:40.415 7.822 - 7.870: 98.5770% ( 2) 00:15:40.415 7.870 - 7.917: 98.5844% ( 1) 00:15:40.415 8.012 - 8.059: 98.5919% ( 1) 00:15:40.415 8.059 - 8.107: 98.5993% ( 1) 00:15:40.415 8.154 - 8.201: 98.6068% ( 1) 00:15:40.415 8.201 - 8.249: 98.6142% ( 1) 00:15:40.415 8.249 - 8.296: 98.6291% ( 2) 00:15:40.415 8.344 - 8.391: 98.6366% ( 1) 00:15:40.415 8.391 - 8.439: 98.6440% ( 1) 00:15:40.415 8.439 - 8.486: 98.6589% ( 2) 00:15:40.415 8.486 - 8.533: 98.6738% ( 2) 00:15:40.415 8.581 - 8.628: 98.6813% ( 1) 00:15:40.415 8.676 - 8.723: 98.6962% ( 2) 00:15:40.415 8.723 - 8.770: 98.7036% ( 1) 00:15:40.415 8.818 - 8.865: 98.7185% ( 2) 00:15:40.415 8.913 - 8.960: 98.7260% ( 1) 00:15:40.415 9.481 - 9.529: 98.7334% ( 1) 00:15:40.415 9.529 - 9.576: 98.7409% ( 1) 00:15:40.415 9.576 - 9.624: 98.7483% ( 1) 00:15:40.415 10.572 - 10.619: 98.7558% ( 1) 00:15:40.415 10.619 - 10.667: 98.7632% ( 1) 00:15:40.415 10.714 - 10.761: 98.7707% ( 1) 00:15:40.415 10.809 - 10.856: 98.7781% ( 1) 00:15:40.415 10.904 - 10.951: 98.7856% ( 1) 00:15:40.415 11.473 - 11.520: 98.7930% ( 1) 00:15:40.415 11.520 - 11.567: 98.8005% ( 1) 00:15:40.415 11.567 - 11.615: 98.8154% ( 2) 00:15:40.415 11.615 - 11.662: 98.8228% ( 1) 00:15:40.415 11.757 - 11.804: 98.8377% ( 2) 00:15:40.415 11.994 - 12.041: 98.8452% ( 1) 00:15:40.415 12.136 - 12.231: 98.8526% ( 1) 00:15:40.415 12.326 - 12.421: 98.8601% ( 1) 00:15:40.415 12.610 - 12.705: 98.8675% ( 1) 00:15:40.415 12.705 - 12.800: 98.8750% ( 1) 00:15:40.415 12.800 - 12.895: 98.8824% ( 1) 00:15:40.415 12.990 - 13.084: 98.8973% ( 2) 00:15:40.415 13.274 - 13.369: 98.9122% ( 2) 00:15:40.415 13.369 - 13.464: 98.9197% ( 1) 00:15:40.415 13.464 - 13.559: 98.9271% ( 1) 00:15:40.415 13.559 - 13.653: 98.9346% ( 1) 00:15:40.415 13.653 - 13.748: 98.9495% ( 2) 00:15:40.415 13.843 - 13.938: 98.9569% ( 1) 00:15:40.415 14.033 - 14.127: 98.9644% ( 1) 00:15:40.415 14.412 - 14.507: 98.9718% ( 1) 00:15:40.415 14.601 - 14.696: 98.9793% ( 1) 00:15:40.415 14.696 - 14.791: 98.9867% ( 1) 00:15:40.415 14.791 - 14.886: 98.9942% ( 1) 00:15:40.415 14.886 - 14.981: 99.0016% ( 1) 00:15:40.415 14.981 - 15.076: 99.0165% ( 2) 00:15:40.415 15.834 - 15.929: 99.0240% ( 1) 00:15:40.415 17.161 - 17.256: 99.0389% ( 2) 00:15:40.415 17.256 - 17.351: 99.0612% ( 3) 00:15:40.415 17.446 - 17.541: 99.0910% ( 4) 00:15:40.415 17.541 - 17.636: 99.1432% ( 7) 00:15:40.415 17.636 - 17.730: 99.2103% ( 9) 00:15:40.415 17.730 - 17.825: 99.2550% 
( 6) 00:15:40.415 17.825 - 17.920: 99.3146% ( 8) 00:15:40.415 17.920 - 18.015: 99.3667% ( 7) 00:15:40.415 18.015 - 18.110: 99.4040% ( 5) 00:15:40.415 18.110 - 18.204: 99.4785% ( 10) 00:15:40.415 18.204 - 18.299: 99.5455% ( 9) 00:15:40.415 18.299 - 18.394: 99.6200% ( 10) 00:15:40.415 18.394 - 18.489: 99.6647% ( 6) 00:15:40.415 18.489 - 18.584: 99.7094% ( 6) 00:15:40.415 18.584 - 18.679: 99.7243% ( 2) 00:15:40.415 18.679 - 18.773: 99.7616% ( 5) 00:15:40.415 18.773 - 18.868: 99.7914% ( 4) 00:15:40.415 18.868 - 18.963: 99.8063% ( 2) 00:15:40.415 18.963 - 19.058: 99.8212% ( 2) 00:15:40.415 19.058 - 19.153: 99.8286% ( 1) 00:15:40.415 19.153 - 19.247: 99.8361% ( 1) 00:15:40.415 19.247 - 19.342: 99.8510% ( 2) 00:15:40.415 19.342 - 19.437: 99.8584% ( 1) 00:15:40.415 20.764 - 20.859: 99.8659% ( 1) 00:15:40.415 22.850 - 22.945: 99.8733% ( 1) 00:15:40.415 25.979 - 26.169: 99.8808% ( 1) 00:15:40.415 27.117 - 27.307: 99.8882% ( 1) 00:15:40.415 28.444 - 28.634: 99.8957% ( 1) 00:15:40.415 32.806 - 32.996: 99.9031% ( 1) 00:15:40.415 3980.705 - 4004.978: 99.9925% ( 12) 00:15:40.415 4004.978 - 4029.250: 100.0000% ( 1) 00:15:40.415 00:15:40.415 Complete histogram 00:15:40.415 ================== 00:15:40.415 Range in us Cumulative Count 00:15:40.415 2.050 - 2.062: 0.0298% ( 4) 00:15:40.415 2.062 - 2.074: 22.8952% ( 3069) 00:15:40.415 2.074 - 2.086: 39.5098% ( 2230) 00:15:40.415 2.086 - 2.098: 41.5512% ( 274) 00:15:40.415 2.098 - 2.110: 57.5101% ( 2142) 00:15:40.415 2.110 - 2.121: 63.9249% ( 861) 00:15:40.415 2.121 - 2.133: 65.8620% ( 260) 00:15:40.415 2.133 - 2.145: 77.0824% ( 1506) 00:15:40.415 2.145 - 2.157: 80.3159% ( 434) 00:15:40.415 2.157 - 2.169: 82.1338% ( 244) 00:15:40.415 2.169 - 2.181: 87.5503% ( 727) 00:15:40.415 2.181 - 2.193: 89.2862% ( 233) 00:15:40.415 2.193 - 2.204: 89.8078% ( 70) 00:15:40.415 2.204 - 2.216: 91.4990% ( 227) 00:15:40.415 2.216 - 2.228: 93.2499% ( 235) 00:15:40.415 2.228 - 2.240: 94.3898% ( 153) 00:15:40.415 2.240 - 2.252: 95.1051% ( 96) 00:15:40.415 2.252 - 2.264: 95.3584% ( 34) 00:15:40.415 2.264 - 2.276: 95.4254% ( 9) 00:15:40.415 2.276 - 2.287: 95.5819% ( 21) 00:15:40.415 2.287 - 2.299: 95.8352% ( 34) 00:15:40.415 2.299 - 2.311: 96.0066% ( 23) 00:15:40.415 2.311 - 2.323: 96.0960% ( 12) 00:15:40.415 2.323 - 2.335: 96.1407% ( 6) 00:15:40.415 2.335 - 2.347: 96.3269% ( 25) 00:15:40.415 2.347 - 2.359: 96.6175% ( 39) 00:15:40.415 2.359 - 2.370: 97.0347% ( 56) 00:15:40.415 2.370 - 2.382: 97.4072% ( 50) 00:15:40.415 2.382 - 2.394: 97.7574% ( 47) 00:15:40.415 2.394 - 2.406: 97.9437% ( 25) 00:15:40.415 2.406 - 2.418: 98.0554% ( 15) 00:15:40.415 2.418 - 2.430: 98.1523% ( 13) 00:15:40.415 2.430 - 2.441: 98.1970% ( 6) 00:15:40.415 2.441 - 2.453: 98.2342% ( 5) 00:15:40.415 2.453 - 2.465: 98.2566% ( 3) 00:15:40.415 2.465 - 2.477: 98.2789% ( 3) 00:15:40.415 2.477 - 2.489: 98.2864% ( 1) 00:15:40.415 2.489 - 2.501: 98.3460% ( 8) 00:15:40.415 2.501 - 2.513: 98.3758% ( 4) 00:15:40.415 2.513 - 2.524: 98.3982% ( 3) 00:15:40.415 2.524 - 2.536: 98.4056% ( 1) 00:15:40.415 2.536 - 2.548: 98.4205% ( 2) 00:15:40.415 2.560 - 2.572: 98.4354% ( 2) 00:15:40.415 2.572 - 2.584: 98.4429% ( 1) 00:15:40.415 2.584 - 2.596: 98.4503% ( 1) 00:15:40.415 2.619 - 2.631: 98.4727% ( 3) 00:15:40.415 2.631 - 2.643: 98.4801% ( 1) 00:15:40.415 2.643 - 2.655: 98.4876% ( 1) 00:15:40.415 2.667 - 2.679: 98.4950% ( 1) 00:15:40.415 3.271 - 3.295: 98.5025% ( 1) 00:15:40.415 3.319 - 3.342: 98.5248% ( 3) 00:15:40.415 3.342 - 3.366: 98.5472% ( 3) 00:15:40.415 3.413 - 3.437: 98.5695% ( 3) 00:15:40.415 3.484 - 3.508: 98.5770% ( 1) 
00:15:40.415 3.532 - 3.556: 98.5844% ( 1) 00:15:40.415 3.556 - 3.579: 98.5993% ( 2) 00:15:40.415 3.579 - 3.603: 98.6142% ( 2) 00:15:40.415 3.627 - 3.650: 98.6217% ( 1) 00:15:40.415 3.650 - 3.674: 98.6291% ( 1) 00:15:40.415 3.698 - 3.721: 98.6440% ( 2) 00:15:40.415 3.721 - 3.745: 98.6515% ( 1) 00:15:40.415 3.745 - 3.769: 98.6589% ( 1) 00:15:40.415 3.887 - 3.911: 98.6664% ( 1) 00:15:40.415 3.935 - 3.959: 98.6813% ( 2) 00:15:40.416 3.982 - 4.006: 98.6887% ( 1) 00:15:40.416 4.788 - 4.812: 98.6962% ( 1) 00:15:40.416 5.049 - 5.073: 9[2024-07-25 19:45:49.457847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.416 8.7036% ( 1) 00:15:40.416 5.167 - 5.191: 98.7111% ( 1) 00:15:40.416 5.191 - 5.215: 98.7185% ( 1) 00:15:40.416 5.333 - 5.357: 98.7334% ( 2) 00:15:40.416 5.357 - 5.381: 98.7409% ( 1) 00:15:40.416 5.428 - 5.452: 98.7558% ( 2) 00:15:40.416 5.452 - 5.476: 98.7632% ( 1) 00:15:40.416 5.641 - 5.665: 98.7707% ( 1) 00:15:40.416 5.736 - 5.760: 98.7781% ( 1) 00:15:40.416 5.784 - 5.807: 98.7856% ( 1) 00:15:40.416 5.807 - 5.831: 98.7930% ( 1) 00:15:40.416 6.447 - 6.495: 98.8005% ( 1) 00:15:40.416 6.495 - 6.542: 98.8079% ( 1) 00:15:40.416 6.542 - 6.590: 98.8154% ( 1) 00:15:40.416 6.590 - 6.637: 98.8228% ( 1) 00:15:40.416 6.684 - 6.732: 98.8303% ( 1) 00:15:40.416 7.396 - 7.443: 98.8377% ( 1) 00:15:40.416 7.490 - 7.538: 98.8452% ( 1) 00:15:40.416 10.714 - 10.761: 98.8526% ( 1) 00:15:40.416 11.425 - 11.473: 98.8601% ( 1) 00:15:40.416 15.550 - 15.644: 98.8824% ( 3) 00:15:40.416 15.644 - 15.739: 98.9122% ( 4) 00:15:40.416 15.929 - 16.024: 98.9420% ( 4) 00:15:40.416 16.024 - 16.119: 98.9718% ( 4) 00:15:40.416 16.119 - 16.213: 99.0091% ( 5) 00:15:40.416 16.213 - 16.308: 99.0389% ( 4) 00:15:40.416 16.308 - 16.403: 99.0687% ( 4) 00:15:40.416 16.403 - 16.498: 99.1208% ( 7) 00:15:40.416 16.498 - 16.593: 99.1655% ( 6) 00:15:40.416 16.593 - 16.687: 99.1879% ( 3) 00:15:40.416 16.687 - 16.782: 99.2326% ( 6) 00:15:40.416 16.782 - 16.877: 99.2848% ( 7) 00:15:40.416 16.877 - 16.972: 99.3071% ( 3) 00:15:40.416 16.972 - 17.067: 99.3146% ( 1) 00:15:40.416 17.067 - 17.161: 99.3295% ( 2) 00:15:40.416 17.161 - 17.256: 99.3444% ( 2) 00:15:40.416 17.351 - 17.446: 99.3518% ( 1) 00:15:40.416 17.541 - 17.636: 99.3593% ( 1) 00:15:40.416 17.730 - 17.825: 99.3667% ( 1) 00:15:40.416 17.825 - 17.920: 99.3742% ( 1) 00:15:40.416 18.394 - 18.489: 99.3816% ( 1) 00:15:40.416 21.902 - 21.997: 99.3891% ( 1) 00:15:40.416 3980.705 - 4004.978: 99.9329% ( 73) 00:15:40.416 4004.978 - 4029.250: 99.9925% ( 8) 00:15:40.416 4975.881 - 5000.154: 100.0000% ( 1) 00:15:40.416 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:40.416 [ 00:15:40.416 { 00:15:40.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.416 "subtype": "Discovery", 00:15:40.416 "listen_addresses": [], 00:15:40.416 "allow_any_host": true, 00:15:40.416 "hosts": [] 00:15:40.416 
}, 00:15:40.416 { 00:15:40.416 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:40.416 "subtype": "NVMe", 00:15:40.416 "listen_addresses": [ 00:15:40.416 { 00:15:40.416 "trtype": "VFIOUSER", 00:15:40.416 "adrfam": "IPv4", 00:15:40.416 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:40.416 "trsvcid": "0" 00:15:40.416 } 00:15:40.416 ], 00:15:40.416 "allow_any_host": true, 00:15:40.416 "hosts": [], 00:15:40.416 "serial_number": "SPDK1", 00:15:40.416 "model_number": "SPDK bdev Controller", 00:15:40.416 "max_namespaces": 32, 00:15:40.416 "min_cntlid": 1, 00:15:40.416 "max_cntlid": 65519, 00:15:40.416 "namespaces": [ 00:15:40.416 { 00:15:40.416 "nsid": 1, 00:15:40.416 "bdev_name": "Malloc1", 00:15:40.416 "name": "Malloc1", 00:15:40.416 "nguid": "0A64A9325588468B890EC0FC0CB439AF", 00:15:40.416 "uuid": "0a64a932-5588-468b-890e-c0fc0cb439af" 00:15:40.416 }, 00:15:40.416 { 00:15:40.416 "nsid": 2, 00:15:40.416 "bdev_name": "Malloc3", 00:15:40.416 "name": "Malloc3", 00:15:40.416 "nguid": "1277611322E9480EA2C0DAB7AE792DFF", 00:15:40.416 "uuid": "12776113-22e9-480e-a2c0-dab7ae792dff" 00:15:40.416 } 00:15:40.416 ] 00:15:40.416 }, 00:15:40.416 { 00:15:40.416 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:40.416 "subtype": "NVMe", 00:15:40.416 "listen_addresses": [ 00:15:40.416 { 00:15:40.416 "trtype": "VFIOUSER", 00:15:40.416 "adrfam": "IPv4", 00:15:40.416 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:40.416 "trsvcid": "0" 00:15:40.416 } 00:15:40.416 ], 00:15:40.416 "allow_any_host": true, 00:15:40.416 "hosts": [], 00:15:40.416 "serial_number": "SPDK2", 00:15:40.416 "model_number": "SPDK bdev Controller", 00:15:40.416 "max_namespaces": 32, 00:15:40.416 "min_cntlid": 1, 00:15:40.416 "max_cntlid": 65519, 00:15:40.416 "namespaces": [ 00:15:40.416 { 00:15:40.416 "nsid": 1, 00:15:40.416 "bdev_name": "Malloc2", 00:15:40.416 "name": "Malloc2", 00:15:40.416 "nguid": "9C0AE9A3659A4863A2834738AE5DC717", 00:15:40.416 "uuid": "9c0ae9a3-659a-4863-a283-4738ae5dc717" 00:15:40.416 } 00:15:40.416 ] 00:15:40.416 } 00:15:40.416 ] 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3943172 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:40.416 19:45:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:40.416 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.674 [2024-07-25 19:45:49.931491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.674 Malloc4 00:15:40.674 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:40.931 [2024-07-25 19:45:50.292144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.931 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:40.931 Asynchronous Event Request test 00:15:40.931 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.931 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.931 Registering asynchronous event callbacks... 00:15:40.931 Starting namespace attribute notice tests for all controllers... 00:15:40.931 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:40.931 aer_cb - Changed Namespace 00:15:40.931 Cleaning up... 00:15:41.189 [ 00:15:41.189 { 00:15:41.189 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:41.189 "subtype": "Discovery", 00:15:41.189 "listen_addresses": [], 00:15:41.189 "allow_any_host": true, 00:15:41.189 "hosts": [] 00:15:41.189 }, 00:15:41.189 { 00:15:41.189 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:41.189 "subtype": "NVMe", 00:15:41.189 "listen_addresses": [ 00:15:41.189 { 00:15:41.189 "trtype": "VFIOUSER", 00:15:41.189 "adrfam": "IPv4", 00:15:41.189 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:41.189 "trsvcid": "0" 00:15:41.189 } 00:15:41.189 ], 00:15:41.189 "allow_any_host": true, 00:15:41.189 "hosts": [], 00:15:41.189 "serial_number": "SPDK1", 00:15:41.189 "model_number": "SPDK bdev Controller", 00:15:41.189 "max_namespaces": 32, 00:15:41.189 "min_cntlid": 1, 00:15:41.189 "max_cntlid": 65519, 00:15:41.189 "namespaces": [ 00:15:41.189 { 00:15:41.189 "nsid": 1, 00:15:41.189 "bdev_name": "Malloc1", 00:15:41.189 "name": "Malloc1", 00:15:41.189 "nguid": "0A64A9325588468B890EC0FC0CB439AF", 00:15:41.189 "uuid": "0a64a932-5588-468b-890e-c0fc0cb439af" 00:15:41.189 }, 00:15:41.189 { 00:15:41.189 "nsid": 2, 00:15:41.189 "bdev_name": "Malloc3", 00:15:41.189 "name": "Malloc3", 00:15:41.189 "nguid": "1277611322E9480EA2C0DAB7AE792DFF", 00:15:41.189 "uuid": "12776113-22e9-480e-a2c0-dab7ae792dff" 00:15:41.189 } 00:15:41.189 ] 00:15:41.189 }, 00:15:41.189 { 00:15:41.189 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:41.189 "subtype": "NVMe", 00:15:41.189 "listen_addresses": [ 00:15:41.189 { 00:15:41.189 "trtype": "VFIOUSER", 00:15:41.189 "adrfam": "IPv4", 00:15:41.189 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:41.189 "trsvcid": "0" 00:15:41.189 } 00:15:41.189 ], 00:15:41.189 "allow_any_host": true, 00:15:41.189 "hosts": [], 00:15:41.189 "serial_number": "SPDK2", 00:15:41.189 "model_number": "SPDK bdev Controller", 00:15:41.189 
"max_namespaces": 32, 00:15:41.189 "min_cntlid": 1, 00:15:41.189 "max_cntlid": 65519, 00:15:41.189 "namespaces": [ 00:15:41.189 { 00:15:41.189 "nsid": 1, 00:15:41.189 "bdev_name": "Malloc2", 00:15:41.189 "name": "Malloc2", 00:15:41.189 "nguid": "9C0AE9A3659A4863A2834738AE5DC717", 00:15:41.189 "uuid": "9c0ae9a3-659a-4863-a283-4738ae5dc717" 00:15:41.189 }, 00:15:41.189 { 00:15:41.189 "nsid": 2, 00:15:41.189 "bdev_name": "Malloc4", 00:15:41.189 "name": "Malloc4", 00:15:41.189 "nguid": "271ECFA7E60F452DA75C5E05BA7B4236", 00:15:41.189 "uuid": "271ecfa7-e60f-452d-a75c-5e05ba7b4236" 00:15:41.189 } 00:15:41.189 ] 00:15:41.189 } 00:15:41.189 ] 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3943172 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3937082 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3937082 ']' 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3937082 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3937082 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3937082' 00:15:41.189 killing process with pid 3937082 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3937082 00:15:41.189 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3937082 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3943314 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3943314' 00:15:41.755 Process pid: 3943314 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3943314 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3943314 ']' 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.755 19:45:50 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:41.755 19:45:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:41.755 [2024-07-25 19:45:50.957432] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:41.755 [2024-07-25 19:45:50.958463] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:15:41.755 [2024-07-25 19:45:50.958519] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.755 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.755 [2024-07-25 19:45:51.022524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.755 [2024-07-25 19:45:51.112549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.755 [2024-07-25 19:45:51.112610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.755 [2024-07-25 19:45:51.112627] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.755 [2024-07-25 19:45:51.112641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.755 [2024-07-25 19:45:51.112653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.755 [2024-07-25 19:45:51.112734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.755 [2024-07-25 19:45:51.112781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.755 [2024-07-25 19:45:51.112874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.755 [2024-07-25 19:45:51.112876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.013 [2024-07-25 19:45:51.212755] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:42.013 [2024-07-25 19:45:51.212981] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:42.013 [2024-07-25 19:45:51.213279] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:42.013 [2024-07-25 19:45:51.213887] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:42.013 [2024-07-25 19:45:51.214159] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
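For readers reconstructing the interrupt-mode bring-up from the trace above and the transport creation that follows, the sequence reduces to roughly the shell steps below. This is a condensed sketch, not the test script itself: the binary and rpc.py paths are copied from the log, and the script additionally waits for the target to listen on /var/tmp/spdk.sock (waitforlisten) before issuing any RPC.

  # Start the target on cores 0-3 with tracepoint group mask 0xFFFF and all reactors in interrupt mode
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # Once the RPC socket is up, create the vfio-user transport with the interrupt-mode flags (-M -I) seen in the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I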
00:15:42.013 19:45:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:42.013 19:45:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:42.013 19:45:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:42.944 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:43.203 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:43.203 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:43.203 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.203 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:43.203 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:43.463 Malloc1 00:15:43.463 19:45:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:43.721 19:45:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:43.978 19:45:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:44.235 19:45:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.235 19:45:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:44.235 19:45:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:44.492 Malloc2 00:15:44.749 19:45:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:45.006 19:45:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:45.263 19:45:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3943314 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3943314 ']' 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3943314 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:45.521 19:45:54 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943314 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943314' 00:15:45.521 killing process with pid 3943314 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3943314 00:15:45.521 19:45:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3943314 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:45.780 00:15:45.780 real 0m52.522s 00:15:45.780 user 3m27.281s 00:15:45.780 sys 0m4.438s 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:45.780 ************************************ 00:15:45.780 END TEST nvmf_vfio_user 00:15:45.780 ************************************ 00:15:45.780 19:45:55 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:45.780 19:45:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:45.780 19:45:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:45.780 19:45:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.780 ************************************ 00:15:45.780 START TEST nvmf_vfio_user_nvme_compliance 00:15:45.780 ************************************ 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:45.780 * Looking for test storage... 
00:15:45.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3943833 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3943833' 00:15:45.780 Process pid: 3943833 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:45.780 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3943833 00:15:45.781 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3943833 ']' 00:15:45.781 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.781 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:45.781 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.781 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:45.781 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.781 [2024-07-25 19:45:55.176981] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:15:45.781 [2024-07-25 19:45:55.177134] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.781 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.039 [2024-07-25 19:45:55.237735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.039 [2024-07-25 19:45:55.321477] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.039 [2024-07-25 19:45:55.321535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.039 [2024-07-25 19:45:55.321564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.039 [2024-07-25 19:45:55.321576] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.039 [2024-07-25 19:45:55.321586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:46.039 [2024-07-25 19:45:55.321655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.039 [2024-07-25 19:45:55.321683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.039 [2024-07-25 19:45:55.321686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.039 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.039 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:46.039 19:45:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.412 malloc0 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:47.412 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.413 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.413 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:47.413 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.413 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.413 
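Condensed from the xtrace above, the target-side setup that the compliance binary exercises next is the following RPC sequence. This is a sketch assuming the default /var/tmp/spdk.sock RPC socket shown earlier in the log; the paths, NQN, and arguments are copied from the trace.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                             # vfio-user transport
  mkdir -p /var/run/vfio-user                                        # directory backing the vfio-user endpoint
  $rpc bdev_malloc_create 64 512 -b malloc0                          # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance run traced next then connects to that endpoint with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'.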
19:45:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:47.413 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.413 00:15:47.413 00:15:47.413 CUnit - A unit testing framework for C - Version 2.1-3 00:15:47.413 http://cunit.sourceforge.net/ 00:15:47.413 00:15:47.413 00:15:47.413 Suite: nvme_compliance 00:15:47.413 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 19:45:56.668567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.413 [2024-07-25 19:45:56.669986] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:47.413 [2024-07-25 19:45:56.670011] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:47.413 [2024-07-25 19:45:56.670024] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:47.413 [2024-07-25 19:45:56.671590] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.413 passed 00:15:47.413 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 19:45:56.756170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.413 [2024-07-25 19:45:56.759192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.413 passed 00:15:47.670 Test: admin_identify_ns ...[2024-07-25 19:45:56.847577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.670 [2024-07-25 19:45:56.907077] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:47.670 [2024-07-25 19:45:56.915076] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:47.670 [2024-07-25 19:45:56.936191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.670 passed 00:15:47.670 Test: admin_get_features_mandatory_features ...[2024-07-25 19:45:57.019899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.671 [2024-07-25 19:45:57.022920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.671 passed 00:15:47.928 Test: admin_get_features_optional_features ...[2024-07-25 19:45:57.107528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.928 [2024-07-25 19:45:57.110554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.928 passed 00:15:47.928 Test: admin_set_features_number_of_queues ...[2024-07-25 19:45:57.190520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.928 [2024-07-25 19:45:57.299161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.928 passed 00:15:48.186 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 19:45:57.382751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.186 [2024-07-25 19:45:57.385776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.186 passed 00:15:48.187 Test: admin_get_log_page_with_lpo ...[2024-07-25 19:45:57.464581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.187 [2024-07-25 19:45:57.537078] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:48.187 [2024-07-25 19:45:57.550137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.187 passed 00:15:48.444 Test: fabric_property_get ...[2024-07-25 19:45:57.633729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.444 [2024-07-25 19:45:57.635004] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:48.444 [2024-07-25 19:45:57.636750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.444 passed 00:15:48.444 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 19:45:57.720293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.444 [2024-07-25 19:45:57.721601] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:48.444 [2024-07-25 19:45:57.723323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.444 passed 00:15:48.444 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 19:45:57.806436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.702 [2024-07-25 19:45:57.889067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.702 [2024-07-25 19:45:57.905067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.702 [2024-07-25 19:45:57.910172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.702 passed 00:15:48.702 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 19:45:57.992762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.702 [2024-07-25 19:45:57.994052] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:48.702 [2024-07-25 19:45:57.995793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.702 passed 00:15:48.702 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 19:45:58.077575] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.960 [2024-07-25 19:45:58.154067] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:48.960 [2024-07-25 19:45:58.178068] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.960 [2024-07-25 19:45:58.183189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.960 passed 00:15:48.960 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 19:45:58.265772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.960 [2024-07-25 19:45:58.267084] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:48.960 [2024-07-25 19:45:58.267131] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:48.960 [2024-07-25 19:45:58.268796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.960 passed 00:15:48.960 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 19:45:58.349173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.218 [2024-07-25 19:45:58.443081] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:49.218 [2024-07-25 19:45:58.451067] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:49.218 [2024-07-25 19:45:58.459085] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:49.218 [2024-07-25 19:45:58.467070] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:49.218 [2024-07-25 19:45:58.496201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.218 passed 00:15:49.218 Test: admin_create_io_sq_verify_pc ...[2024-07-25 19:45:58.579560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.218 [2024-07-25 19:45:58.596087] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:49.218 [2024-07-25 19:45:58.613904] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.218 passed 00:15:49.476 Test: admin_create_io_qp_max_qps ...[2024-07-25 19:45:58.698499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.409 [2024-07-25 19:45:59.801076] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:50.974 [2024-07-25 19:46:00.177517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.974 passed 00:15:50.974 Test: admin_create_io_sq_shared_cq ...[2024-07-25 19:46:00.260565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.974 [2024-07-25 19:46:00.392089] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.233 [2024-07-25 19:46:00.429175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.233 passed 00:15:51.233 00:15:51.233 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.233 suites 1 1 n/a 0 0 00:15:51.233 tests 18 18 18 0 0 00:15:51.233 asserts 360 360 360 0 n/a 00:15:51.233 00:15:51.233 Elapsed time = 1.560 seconds 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3943833 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3943833 ']' 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3943833 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943833 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943833' 00:15:51.233 killing process with pid 3943833 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3943833 00:15:51.233 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3943833 00:15:51.491 19:46:00 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:51.491 00:15:51.491 real 0m5.676s 00:15:51.491 user 0m16.056s 00:15:51.491 sys 0m0.543s 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.491 ************************************ 00:15:51.491 END TEST nvmf_vfio_user_nvme_compliance 00:15:51.491 ************************************ 00:15:51.491 19:46:00 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:51.491 19:46:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:51.491 19:46:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.491 19:46:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.491 ************************************ 00:15:51.491 START TEST nvmf_vfio_user_fuzz 00:15:51.491 ************************************ 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:51.491 * Looking for test storage... 00:15:51.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.491 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3944605 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3944605' 00:15:51.492 Process pid: 3944605 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3944605 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3944605 ']' 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:51.492 19:46:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.749 19:46:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:51.749 19:46:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:51.749 19:46:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 malloc0 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:53.162 19:46:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:25.234 Fuzzing completed. 
Shutting down the fuzz application 00:16:25.234 00:16:25.234 Dumping successful admin opcodes: 00:16:25.234 8, 9, 10, 24, 00:16:25.234 Dumping successful io opcodes: 00:16:25.234 0, 00:16:25.234 NS: 0x200003a1ef00 I/O qp, Total commands completed: 563783, total successful commands: 2169, random_seed: 634904064 00:16:25.234 NS: 0x200003a1ef00 admin qp, Total commands completed: 139806, total successful commands: 1132, random_seed: 1370833792 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3944605 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3944605 ']' 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3944605 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3944605 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3944605' 00:16:25.234 killing process with pid 3944605 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3944605 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3944605 00:16:25.234 19:46:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:25.234 19:46:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:25.234 00:16:25.234 real 0m32.243s 00:16:25.234 user 0m31.189s 00:16:25.234 sys 0m29.485s 00:16:25.234 19:46:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:25.234 19:46:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.234 ************************************ 00:16:25.234 END TEST nvmf_vfio_user_fuzz 00:16:25.234 ************************************ 00:16:25.234 19:46:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:25.234 19:46:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:25.234 19:46:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:25.234 19:46:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.234 ************************************ 00:16:25.234 START TEST nvmf_host_management 00:16:25.234 
************************************ 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:25.234 * Looking for test storage... 00:16:25.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.234 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
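The entries that follow come from gather_supported_nvmf_pci_devs in nvmf/common.sh: it classifies NICs by PCI vendor:device ID (Intel E810, 0x8086:0x159b here), then maps each matching PCI address to its kernel net device through sysfs, which is where the later "Found net devices under 0000:0a:00.x: cvl_0_x" lines come from. A minimal standalone sketch of that idea, assuming pciutils is available and without reproducing the script's own code:
# Hedged sketch: list Intel E810 (8086:159b) ports and the net devices behind them,
# i.e. the same vendor:device -> sysfs net mapping the log entries below perform.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        # Skip the literal glob if a port has no bound net device.
        [ -e "$net" ] && echo "$pci -> ${net##*/}"
    done
done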
00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:25.235 19:46:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.804 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.805 19:46:35 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:25.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:25.805 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:25.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:25.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.805 19:46:35 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:25.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:16:25.805 00:16:25.805 --- 10.0.0.2 ping statistics --- 00:16:25.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.805 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:25.805 00:16:25.805 --- 10.0.0.1 ping statistics --- 00:16:25.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.805 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.805 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3949958 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3949958 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3949958 ']' 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:26.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.064 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.064 [2024-07-25 19:46:35.288329] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:26.064 [2024-07-25 19:46:35.288428] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.064 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.064 [2024-07-25 19:46:35.358836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.064 [2024-07-25 19:46:35.456894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.064 [2024-07-25 19:46:35.456948] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.064 [2024-07-25 19:46:35.456972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.064 [2024-07-25 19:46:35.456984] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.064 [2024-07-25 19:46:35.456994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.064 [2024-07-25 19:46:35.457038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.064 [2024-07-25 19:46:35.457091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.064 [2024-07-25 19:46:35.457121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.064 [2024-07-25 19:46:35.457124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.323 [2024-07-25 19:46:35.602889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.323 Malloc0 00:16:26.323 [2024-07-25 19:46:35.663600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3950118 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3950118 /var/tmp/bdevperf.sock 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3950118 ']' 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:26.323 { 00:16:26.323 "params": { 00:16:26.323 "name": "Nvme$subsystem", 00:16:26.323 "trtype": "$TEST_TRANSPORT", 00:16:26.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.323 "adrfam": "ipv4", 00:16:26.323 "trsvcid": "$NVMF_PORT", 00:16:26.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.323 "hdgst": ${hdgst:-false}, 00:16:26.323 "ddgst": ${ddgst:-false} 00:16:26.323 }, 00:16:26.323 "method": "bdev_nvme_attach_controller" 00:16:26.323 } 00:16:26.323 EOF 00:16:26.323 )") 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:26.323 19:46:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:26.323 "params": { 00:16:26.323 "name": "Nvme0", 00:16:26.323 "trtype": "tcp", 00:16:26.323 "traddr": "10.0.0.2", 00:16:26.323 "adrfam": "ipv4", 00:16:26.323 "trsvcid": "4420", 00:16:26.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:26.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:26.323 "hdgst": false, 00:16:26.323 "ddgst": false 00:16:26.323 }, 00:16:26.323 "method": "bdev_nvme_attach_controller" 00:16:26.323 }' 00:16:26.323 [2024-07-25 19:46:35.741055] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:26.323 [2024-07-25 19:46:35.741151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950118 ] 00:16:26.581 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.581 [2024-07-25 19:46:35.802001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.581 [2024-07-25 19:46:35.888919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.840 Running I/O for 10 seconds... 
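The bdevperf configuration above is generated on the fly by gen_nvmf_target_json and handed to bdevperf over /dev/fd/63; only the bdev_nvme_attach_controller entry is echoed in the log, not the surrounding wrapper. A hand-rolled equivalent, assuming the standard SPDK "subsystems"/"bdev" JSON config layout around that entry (an assumption, since the wrapper itself is not shown here), would look like this:
# Hedged sketch: write the same attach-controller parameters into a plain JSON file
# and start bdevperf with the same queue depth, I/O size, workload and runtime as the
# run above. Paths are relative to an SPDK build tree, not this job's workspace.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10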
00:16:26.840 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.840 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:26.840 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:26.840 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.840 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:27.098 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:27.358 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:27.358 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=527 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 527 -ge 100 ']' 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.359 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 [2024-07-25 19:46:36.615984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) 
to be set 00:16:27.359 [2024-07-25 19:46:36.616324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.616917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44120 is same with the state(5) to be set 00:16:27.359 [2024-07-25 19:46:36.617052] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-25 19:46:36.617100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.359 [2024-07-25 19:46:36.617132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-25 19:46:36.617150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.359 [2024-07-25 19:46:36.617168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-25 19:46:36.617184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.359 [2024-07-25 19:46:36.617202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-25 19:46:36.617218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.359 [2024-07-25 19:46:36.617236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-25 19:46:36.617251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.359 [2024-07-25 19:46:36.617269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-25 19:46:36.617285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.617970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.617985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.360 [2024-07-25 19:46:36.618634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.360 [2024-07-25 19:46:36.618650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.618978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.618995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.361 [2024-07-25 19:46:36.619156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.361 [2024-07-25 19:46:36.619291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dc110 is same with the state(5) to be set 00:16:27.361 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:27.361 [2024-07-25 19:46:36.619393] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27dc110 was disconnected and freed. reset controller. 
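For context on the step above: while the initiator's I/O qpair is being torn down (the long run of ABORTED - SQ DELETION completions), the test grants host0 access to the subsystem via rpc_cmd nvmf_subsystem_add_host. rpc_cmd in these tests is, to the best of my understanding, a thin wrapper around scripts/rpc.py, so the equivalent direct calls would look roughly like the sketch below (assuming the default /var/tmp/spdk.sock RPC socket):

  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  ./scripts/rpc.py nvmf_get_subsystems    # host0 should now appear in the subsystem's allowed hosts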
00:16:27.361 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.361 [2024-07-25 19:46:36.619462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.361 [2024-07-25 19:46:36.619485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.361 [2024-07-25 19:46:36.619517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.361 [2024-07-25 19:46:36.619553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.361 [2024-07-25 19:46:36.619585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.361 [2024-07-25 19:46:36.619600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cb1e0 is same with the state(5) to be set 00:16:27.361 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:27.361 [2024-07-25 19:46:36.620744] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:27.361 task offset: 73728 on job bdev=Nvme0n1 fails 00:16:27.361 00:16:27.361 Latency(us) 00:16:27.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.361 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:27.361 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:27.361 Verification LBA range: start 0x0 length 0x400 00:16:27.361 Nvme0n1 : 0.39 1475.87 92.24 163.99 0.00 37891.03 6262.33 33593.27 00:16:27.361 =================================================================================================================== 00:16:27.361 Total : 1475.87 92.24 163.99 0.00 37891.03 6262.33 33593.27 00:16:27.361 [2024-07-25 19:46:36.622775] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.361 [2024-07-25 19:46:36.622813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cb1e0 (9): Bad file descriptor 00:16:27.361 19:46:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.361 19:46:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:27.361 [2024-07-25 19:46:36.629227] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
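As a quick sanity check on the bdevperf summary above: with -o 65536 (64 KiB I/Os) the MiB/s column is simply IOPS / 16, so 1475.87 IOPS ≈ 92.24 MiB/s for this aborted 0.39 s run, and the same relation holds for the clean 1 s verify run further below (1686.26 IOPS ≈ 105.39 MiB/s).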
00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3950118 00:16:28.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3950118) - No such process 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:28.294 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:28.294 { 00:16:28.294 "params": { 00:16:28.294 "name": "Nvme$subsystem", 00:16:28.294 "trtype": "$TEST_TRANSPORT", 00:16:28.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.294 "adrfam": "ipv4", 00:16:28.294 "trsvcid": "$NVMF_PORT", 00:16:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.295 "hdgst": ${hdgst:-false}, 00:16:28.295 "ddgst": ${ddgst:-false} 00:16:28.295 }, 00:16:28.295 "method": "bdev_nvme_attach_controller" 00:16:28.295 } 00:16:28.295 EOF 00:16:28.295 )") 00:16:28.295 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:28.295 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:28.295 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:28.295 19:46:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:28.295 "params": { 00:16:28.295 "name": "Nvme0", 00:16:28.295 "trtype": "tcp", 00:16:28.295 "traddr": "10.0.0.2", 00:16:28.295 "adrfam": "ipv4", 00:16:28.295 "trsvcid": "4420", 00:16:28.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:28.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:28.295 "hdgst": false, 00:16:28.295 "ddgst": false 00:16:28.295 }, 00:16:28.295 "method": "bdev_nvme_attach_controller" 00:16:28.295 }' 00:16:28.295 [2024-07-25 19:46:37.678358] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:28.295 [2024-07-25 19:46:37.678485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950308 ] 00:16:28.295 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.553 [2024-07-25 19:46:37.743085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.553 [2024-07-25 19:46:37.828830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.811 Running I/O for 1 seconds... 
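The JSON that gen_nvmf_target_json feeds to bdevperf above boils down to a single bdev_nvme_attach_controller entry pointing at the target created earlier. Below is a minimal standalone sketch of the same invocation, run from the SPDK repo root, with the config written to an illustrative temporary file instead of /dev/fd/62; the envelope around the fragment printed above is an assumption based on the standard SPDK subsystems/bdev JSON config layout, not something shown verbatim in this log.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same queue depth, IO size, workload and runtime as the log run: -q 64 -o 65536 -w verify -t 1
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1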
00:16:30.186 00:16:30.186 Latency(us) 00:16:30.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.186 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:30.186 Verification LBA range: start 0x0 length 0x400 00:16:30.186 Nvme0n1 : 1.02 1686.26 105.39 0.00 0.00 37334.23 4854.52 33204.91 00:16:30.186 =================================================================================================================== 00:16:30.186 Total : 1686.26 105.39 0.00 0.00 37334.23 4854.52 33204.91 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:30.186 rmmod nvme_tcp 00:16:30.186 rmmod nvme_fabrics 00:16:30.186 rmmod nvme_keyring 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3949958 ']' 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3949958 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3949958 ']' 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3949958 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3949958 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3949958' 00:16:30.186 killing process with pid 3949958 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3949958 00:16:30.186 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3949958 00:16:30.445 [2024-07-25 19:46:39.676478] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.445 19:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.345 19:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:32.345 19:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:32.345 00:16:32.345 real 0m8.670s 00:16:32.345 user 0m19.955s 00:16:32.345 sys 0m2.606s 00:16:32.345 19:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:32.345 19:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.345 ************************************ 00:16:32.345 END TEST nvmf_host_management 00:16:32.345 ************************************ 00:16:32.604 19:46:41 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:32.604 19:46:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:32.604 19:46:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:32.604 19:46:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:32.604 ************************************ 00:16:32.604 START TEST nvmf_lvol 00:16:32.604 ************************************ 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:32.604 * Looking for test storage... 
00:16:32.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 19:46:41 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.604 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.605 19:46:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:34.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:34.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.505 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:34.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:34.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.506 
19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.506 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:16:34.764 00:16:34.764 --- 10.0.0.2 ping statistics --- 00:16:34.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.764 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:16:34.764 00:16:34.764 --- 10.0.0.1 ping statistics --- 00:16:34.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.764 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.764 19:46:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3952473 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3952473 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3952473 ']' 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:34.764 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:34.764 [2024-07-25 19:46:44.060197] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:34.764 [2024-07-25 19:46:44.060277] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.765 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.765 [2024-07-25 19:46:44.129284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.023 [2024-07-25 19:46:44.214008] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.023 [2024-07-25 19:46:44.214056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:35.023 [2024-07-25 19:46:44.214105] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.023 [2024-07-25 19:46:44.214117] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.023 [2024-07-25 19:46:44.214128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.023 [2024-07-25 19:46:44.214210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.023 [2024-07-25 19:46:44.214237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.023 [2024-07-25 19:46:44.214260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.023 19:46:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:35.281 [2024-07-25 19:46:44.568587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.281 19:46:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.539 19:46:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:35.539 19:46:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.797 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:35.797 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:36.055 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:36.314 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d6c43508-b5d2-49f7-b039-829a1964950c 00:16:36.314 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6c43508-b5d2-49f7-b039-829a1964950c lvol 20 00:16:36.573 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8c27bc42-5f13-4795-adca-1be98615de92 00:16:36.573 19:46:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:36.862 19:46:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c27bc42-5f13-4795-adca-1be98615de92 00:16:37.120 19:46:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
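Stripped of the xtrace noise, the lvol target setup above is a straight sequence of rpc.py calls. A condensed sketch of the same sequence (paths relative to the SPDK repo root, and the per-run UUIDs captured into variables rather than hard-coded):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                           # -> Malloc0
  $rpc bdev_malloc_create 64 512                           # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)           # prints the new lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)          # 20 MiB lvol; prints the lvol bdev name (a UUID)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Building the lvstore on a raid0 of two 64 MiB malloc bdevs leaves headroom for the later steps in this run, where the test snapshots the lvol, resizes it to 30 MiB, clones the snapshot and inflates the clone (the bdev_lvol_snapshot / bdev_lvol_resize / bdev_lvol_clone / bdev_lvol_inflate calls below).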
00:16:37.378 [2024-07-25 19:46:46.601553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.378 19:46:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.635 19:46:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3952895 00:16:37.635 19:46:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:37.635 19:46:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:37.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.568 19:46:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8c27bc42-5f13-4795-adca-1be98615de92 MY_SNAPSHOT 00:16:38.826 19:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=51593e80-8caa-4ff5-be38-6571d19685a4 00:16:38.826 19:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8c27bc42-5f13-4795-adca-1be98615de92 30 00:16:39.084 19:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 51593e80-8caa-4ff5-be38-6571d19685a4 MY_CLONE 00:16:39.341 19:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=387da1e9-dfaa-44cc-8638-3969e4fefcd0 00:16:39.341 19:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 387da1e9-dfaa-44cc-8638-3969e4fefcd0 00:16:40.275 19:46:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3952895 00:16:48.379 Initializing NVMe Controllers 00:16:48.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:48.379 Controller IO queue size 128, less than required. 00:16:48.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:48.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:48.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:48.379 Initialization complete. Launching workers. 
00:16:48.379 ======================================================== 00:16:48.379 Latency(us) 00:16:48.379 Device Information : IOPS MiB/s Average min max 00:16:48.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10146.25 39.63 12619.67 1808.89 78702.07 00:16:48.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10627.93 41.52 12047.04 2015.33 72551.59 00:16:48.379 ======================================================== 00:16:48.379 Total : 20774.19 81.15 12326.72 1808.89 78702.07 00:16:48.379 00:16:48.379 19:46:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:48.379 19:46:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8c27bc42-5f13-4795-adca-1be98615de92 00:16:48.379 19:46:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6c43508-b5d2-49f7-b039-829a1964950c 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.637 rmmod nvme_tcp 00:16:48.637 rmmod nvme_fabrics 00:16:48.637 rmmod nvme_keyring 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3952473 ']' 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3952473 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3952473 ']' 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3952473 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:48.637 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3952473 00:16:48.895 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:48.895 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:48.895 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3952473' 00:16:48.895 killing process with pid 3952473 00:16:48.895 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3952473 00:16:48.895 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3952473 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.154 
19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.154 19:46:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.053 19:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.053 00:16:51.053 real 0m18.614s 00:16:51.053 user 1m3.313s 00:16:51.053 sys 0m5.621s 00:16:51.053 19:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.053 19:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:51.053 ************************************ 00:16:51.053 END TEST nvmf_lvol 00:16:51.053 ************************************ 00:16:51.053 19:47:00 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:51.053 19:47:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:51.053 19:47:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:51.053 19:47:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.053 ************************************ 00:16:51.053 START TEST nvmf_lvs_grow 00:16:51.053 ************************************ 00:16:51.053 19:47:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:51.312 * Looking for test storage... 
00:16:51.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.312 19:47:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.313 19:47:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.216 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:53.217 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:53.217 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:53.217 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:53.217 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:16:53.217 00:16:53.217 --- 10.0.0.2 ping statistics --- 00:16:53.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.217 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:16:53.217 00:16:53.217 --- 10.0.0.1 ping statistics --- 00:16:53.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.217 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:53.217 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3956149 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3956149 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3956149 ']' 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:53.475 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.475 [2024-07-25 19:47:02.695160] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:53.475 [2024-07-25 19:47:02.695239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.475 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.475 [2024-07-25 19:47:02.758677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.475 [2024-07-25 19:47:02.841877] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.475 [2024-07-25 19:47:02.841927] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.476 [2024-07-25 19:47:02.841956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.476 [2024-07-25 19:47:02.841967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.476 [2024-07-25 19:47:02.841977] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.476 [2024-07-25 19:47:02.842011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.734 19:47:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.991 [2024-07-25 19:47:03.250191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.991 ************************************ 00:16:53.991 START TEST lvs_grow_clean 00:16:53.991 ************************************ 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:53.991 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:53.992 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:53.992 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.249 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:54.249 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:54.507 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7a7858e-e5eb-4eaa-b321-493246790590 00:16:54.507 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:16:54.507 19:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:54.765 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:54.765 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:54.765 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7a7858e-e5eb-4eaa-b321-493246790590 lvol 150 00:16:55.024 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=50bb997c-ae0f-4a68-ac82-2418a5dba27f 00:16:55.024 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.024 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:55.282 [2024-07-25 19:47:04.580191] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:55.283 [2024-07-25 19:47:04.580271] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:55.283 true 00:16:55.283 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:16:55.283 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:55.540 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:55.540 19:47:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:55.797 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50bb997c-ae0f-4a68-ac82-2418a5dba27f 00:16:56.055 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.313 [2024-07-25 19:47:05.595294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.313 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3956473 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3956473 /var/tmp/bdevperf.sock 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3956473 ']' 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.572 19:47:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:56.572 [2024-07-25 19:47:05.891528] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
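Condensed, the lvs_grow_clean setup just traced looks like this (a sketch: rpc.py stands for scripts/rpc.py, bdevperf for build/examples/bdevperf, aio_file for the test/nvmf/target/aio_bdev path, and the UUID capture mirrors what the script does):

# 200 MiB AIO bdev with a 4 MiB-cluster lvstore on top; this run reports
# total_data_clusters=49 (50 clusters minus lvstore metadata).
truncate -s 200M aio_file
rpc.py bdev_aio_create aio_file aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)

# Enlarge the backing file; the AIO bdev grows after the rescan, but the
# lvstore still reports 49 data clusters until bdev_lvol_grow_lvstore runs.
truncate -s 400M aio_file
rpc.py bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP and attach bdevperf to it as the initiator.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0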
00:16:56.572 [2024-07-25 19:47:05.891602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956473 ] 00:16:56.572 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.572 [2024-07-25 19:47:05.956864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.830 [2024-07-25 19:47:06.048817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.830 19:47:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.830 19:47:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:56.830 19:47:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:57.088 Nvme0n1 00:16:57.088 19:47:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:57.347 [ 00:16:57.347 { 00:16:57.347 "name": "Nvme0n1", 00:16:57.347 "aliases": [ 00:16:57.347 "50bb997c-ae0f-4a68-ac82-2418a5dba27f" 00:16:57.347 ], 00:16:57.347 "product_name": "NVMe disk", 00:16:57.347 "block_size": 4096, 00:16:57.347 "num_blocks": 38912, 00:16:57.347 "uuid": "50bb997c-ae0f-4a68-ac82-2418a5dba27f", 00:16:57.347 "assigned_rate_limits": { 00:16:57.347 "rw_ios_per_sec": 0, 00:16:57.347 "rw_mbytes_per_sec": 0, 00:16:57.347 "r_mbytes_per_sec": 0, 00:16:57.347 "w_mbytes_per_sec": 0 00:16:57.347 }, 00:16:57.347 "claimed": false, 00:16:57.347 "zoned": false, 00:16:57.347 "supported_io_types": { 00:16:57.347 "read": true, 00:16:57.347 "write": true, 00:16:57.347 "unmap": true, 00:16:57.347 "write_zeroes": true, 00:16:57.347 "flush": true, 00:16:57.347 "reset": true, 00:16:57.347 "compare": true, 00:16:57.347 "compare_and_write": true, 00:16:57.347 "abort": true, 00:16:57.347 "nvme_admin": true, 00:16:57.347 "nvme_io": true 00:16:57.347 }, 00:16:57.347 "memory_domains": [ 00:16:57.347 { 00:16:57.347 "dma_device_id": "system", 00:16:57.347 "dma_device_type": 1 00:16:57.347 } 00:16:57.347 ], 00:16:57.347 "driver_specific": { 00:16:57.347 "nvme": [ 00:16:57.347 { 00:16:57.347 "trid": { 00:16:57.347 "trtype": "TCP", 00:16:57.347 "adrfam": "IPv4", 00:16:57.347 "traddr": "10.0.0.2", 00:16:57.347 "trsvcid": "4420", 00:16:57.347 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:57.347 }, 00:16:57.347 "ctrlr_data": { 00:16:57.347 "cntlid": 1, 00:16:57.347 "vendor_id": "0x8086", 00:16:57.347 "model_number": "SPDK bdev Controller", 00:16:57.347 "serial_number": "SPDK0", 00:16:57.347 "firmware_revision": "24.05.1", 00:16:57.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.347 "oacs": { 00:16:57.347 "security": 0, 00:16:57.347 "format": 0, 00:16:57.347 "firmware": 0, 00:16:57.347 "ns_manage": 0 00:16:57.347 }, 00:16:57.347 "multi_ctrlr": true, 00:16:57.347 "ana_reporting": false 00:16:57.347 }, 00:16:57.347 "vs": { 00:16:57.347 "nvme_version": "1.3" 00:16:57.347 }, 00:16:57.347 "ns_data": { 00:16:57.347 "id": 1, 00:16:57.347 "can_share": true 00:16:57.347 } 00:16:57.347 } 00:16:57.347 ], 00:16:57.347 "mp_policy": "active_passive" 00:16:57.347 } 00:16:57.347 } 00:16:57.347 ] 00:16:57.347 19:47:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3956604 00:16:57.347 19:47:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:57.347 19:47:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.605 Running I/O for 10 seconds... 00:16:58.541 Latency(us) 00:16:58.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.541 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:16:58.541 =================================================================================================================== 00:16:58.541 Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:16:58.541 00:16:59.502 19:47:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7a7858e-e5eb-4eaa-b321-493246790590 00:16:59.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.502 Nvme0n1 : 2.00 14988.00 58.55 0.00 0.00 0.00 0.00 0.00 00:16:59.502 =================================================================================================================== 00:16:59.502 Total : 14988.00 58.55 0.00 0.00 0.00 0.00 0.00 00:16:59.502 00:16:59.760 true 00:16:59.760 19:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:16:59.760 19:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:00.019 19:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:00.019 19:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:00.019 19:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3956604 00:17:00.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.585 Nvme0n1 : 3.00 14973.67 58.49 0.00 0.00 0.00 0.00 0.00 00:17:00.585 =================================================================================================================== 00:17:00.585 Total : 14973.67 58.49 0.00 0.00 0.00 0.00 0.00 00:17:00.585 00:17:01.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.523 Nvme0n1 : 4.00 15103.75 59.00 0.00 0.00 0.00 0.00 0.00 00:17:01.523 =================================================================================================================== 00:17:01.523 Total : 15103.75 59.00 0.00 0.00 0.00 0.00 0.00 00:17:01.523 00:17:02.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.459 Nvme0n1 : 5.00 15080.20 58.91 0.00 0.00 0.00 0.00 0.00 00:17:02.459 =================================================================================================================== 00:17:02.459 Total : 15080.20 58.91 0.00 0.00 0.00 0.00 0.00 00:17:02.459 00:17:03.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.836 Nvme0n1 : 6.00 15067.33 58.86 0.00 0.00 0.00 0.00 0.00 00:17:03.836 
=================================================================================================================== 00:17:03.836 Total : 15067.33 58.86 0.00 0.00 0.00 0.00 0.00 00:17:03.836 00:17:04.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.778 Nvme0n1 : 7.00 15055.71 58.81 0.00 0.00 0.00 0.00 0.00 00:17:04.778 =================================================================================================================== 00:17:04.778 Total : 15055.71 58.81 0.00 0.00 0.00 0.00 0.00 00:17:04.778 00:17:05.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.714 Nvme0n1 : 8.00 15062.88 58.84 0.00 0.00 0.00 0.00 0.00 00:17:05.714 =================================================================================================================== 00:17:05.714 Total : 15062.88 58.84 0.00 0.00 0.00 0.00 0.00 00:17:05.714 00:17:06.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.652 Nvme0n1 : 9.00 15054.33 58.81 0.00 0.00 0.00 0.00 0.00 00:17:06.652 =================================================================================================================== 00:17:06.652 Total : 15054.33 58.81 0.00 0.00 0.00 0.00 0.00 00:17:06.652 00:17:07.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.590 Nvme0n1 : 10.00 15060.20 58.83 0.00 0.00 0.00 0.00 0.00 00:17:07.590 =================================================================================================================== 00:17:07.590 Total : 15060.20 58.83 0.00 0.00 0.00 0.00 0.00 00:17:07.590 00:17:07.590 00:17:07.590 Latency(us) 00:17:07.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.590 Nvme0n1 : 10.00 15066.48 58.85 0.00 0.00 8490.58 4636.07 16699.54 00:17:07.590 =================================================================================================================== 00:17:07.590 Total : 15066.48 58.85 0.00 0.00 8490.58 4636.07 16699.54 00:17:07.590 0 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3956473 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3956473 ']' 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3956473 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3956473 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3956473' 00:17:07.590 killing process with pid 3956473 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3956473 00:17:07.590 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.590 00:17:07.590 Latency(us) 00:17:07.590 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:07.590 =================================================================================================================== 00:17:07.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.590 19:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3956473 00:17:07.849 19:47:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.107 19:47:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:08.364 19:47:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:08.364 19:47:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:08.623 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:08.623 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:08.623 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.881 [2024-07-25 19:47:18.265951] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:08.881 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:09.139 request: 00:17:09.139 { 00:17:09.139 "uuid": "c7a7858e-e5eb-4eaa-b321-493246790590", 00:17:09.139 "method": "bdev_lvol_get_lvstores", 00:17:09.139 "req_id": 1 00:17:09.139 } 00:17:09.139 Got JSON-RPC error response 00:17:09.139 response: 00:17:09.139 { 00:17:09.139 "code": -19, 00:17:09.139 "message": "No such device" 00:17:09.139 } 00:17:09.139 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:09.139 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.139 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.139 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.139 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.399 aio_bdev 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 50bb997c-ae0f-4a68-ac82-2418a5dba27f 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=50bb997c-ae0f-4a68-ac82-2418a5dba27f 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:09.399 19:47:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.659 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 50bb997c-ae0f-4a68-ac82-2418a5dba27f -t 2000 00:17:09.919 [ 00:17:09.919 { 00:17:09.919 "name": "50bb997c-ae0f-4a68-ac82-2418a5dba27f", 00:17:09.919 "aliases": [ 00:17:09.919 "lvs/lvol" 00:17:09.919 ], 00:17:09.919 "product_name": "Logical Volume", 00:17:09.919 "block_size": 4096, 00:17:09.919 "num_blocks": 38912, 00:17:09.919 "uuid": "50bb997c-ae0f-4a68-ac82-2418a5dba27f", 00:17:09.919 "assigned_rate_limits": { 00:17:09.919 "rw_ios_per_sec": 0, 00:17:09.919 "rw_mbytes_per_sec": 0, 00:17:09.919 "r_mbytes_per_sec": 0, 00:17:09.919 "w_mbytes_per_sec": 0 00:17:09.919 }, 00:17:09.919 "claimed": false, 00:17:09.919 "zoned": false, 00:17:09.919 "supported_io_types": { 00:17:09.919 "read": true, 00:17:09.919 "write": true, 00:17:09.919 "unmap": true, 00:17:09.919 "write_zeroes": true, 00:17:09.919 "flush": false, 00:17:09.919 "reset": true, 00:17:09.919 "compare": false, 00:17:09.919 "compare_and_write": false, 00:17:09.919 "abort": false, 00:17:09.919 "nvme_admin": false, 00:17:09.919 "nvme_io": false 00:17:09.919 }, 00:17:09.919 "driver_specific": { 00:17:09.919 "lvol": { 00:17:09.919 "lvol_store_uuid": "c7a7858e-e5eb-4eaa-b321-493246790590", 00:17:09.919 "base_bdev": "aio_bdev", 
00:17:09.919 "thin_provision": false, 00:17:09.919 "num_allocated_clusters": 38, 00:17:09.919 "snapshot": false, 00:17:09.919 "clone": false, 00:17:09.919 "esnap_clone": false 00:17:09.919 } 00:17:09.919 } 00:17:09.919 } 00:17:09.919 ] 00:17:09.919 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:09.919 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:09.919 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:10.178 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:10.178 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:10.178 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:10.436 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:10.436 19:47:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50bb997c-ae0f-4a68-ac82-2418a5dba27f 00:17:10.694 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7a7858e-e5eb-4eaa-b321-493246790590 00:17:10.952 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.210 00:17:11.210 real 0m17.306s 00:17:11.210 user 0m16.673s 00:17:11.210 sys 0m1.868s 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.210 ************************************ 00:17:11.210 END TEST lvs_grow_clean 00:17:11.210 ************************************ 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.210 19:47:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.468 ************************************ 00:17:11.468 START TEST lvs_grow_dirty 00:17:11.468 ************************************ 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.468 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.728 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:11.728 19:47:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:11.986 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:11.986 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:11.986 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:12.244 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:12.244 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:12.244 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 lvol 150 00:17:12.510 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:12.510 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:12.510 19:47:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:12.771 [2024-07-25 19:47:22.068589] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:12.771 [2024-07-25 19:47:22.068673] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:12.771 true 00:17:12.772 19:47:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:12.772 19:47:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:13.027 19:47:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:13.027 19:47:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:13.285 19:47:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:13.543 19:47:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:13.800 [2024-07-25 19:47:23.183953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.800 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3958654 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3958654 /var/tmp/bdevperf.sock 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3958654 ']' 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:14.371 [2024-07-25 19:47:23.530193] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
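As in the clean case, the interesting part happens while bdevperf is driving randwrite I/O: the outer script grows the lvstore into the enlarged backing file and re-checks the cluster count, roughly as follows (a sketch; $lvs is the lvstore UUID, 8ce738b1-164d-4309-a4da-e50ce3d8bd77 in this run, and bdevperf.py is examples/bdev/bdevperf/bdevperf.py):

# Kick off the 10-second randwrite job over the bdevperf RPC socket.
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Grow the lvstore into the 400 MiB AIO bdev while I/O is in flight and
# confirm total_data_clusters goes from 49 to 99.
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99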
00:17:14.371 [2024-07-25 19:47:23.530266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958654 ] 00:17:14.371 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.371 [2024-07-25 19:47:23.592292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.371 [2024-07-25 19:47:23.682747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:14.371 19:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:14.972 Nvme0n1 00:17:14.972 19:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:15.229 [ 00:17:15.229 { 00:17:15.229 "name": "Nvme0n1", 00:17:15.229 "aliases": [ 00:17:15.229 "432dd71b-98b1-426a-9a97-9bc4bd1f4ef8" 00:17:15.229 ], 00:17:15.229 "product_name": "NVMe disk", 00:17:15.229 "block_size": 4096, 00:17:15.229 "num_blocks": 38912, 00:17:15.229 "uuid": "432dd71b-98b1-426a-9a97-9bc4bd1f4ef8", 00:17:15.229 "assigned_rate_limits": { 00:17:15.229 "rw_ios_per_sec": 0, 00:17:15.229 "rw_mbytes_per_sec": 0, 00:17:15.229 "r_mbytes_per_sec": 0, 00:17:15.229 "w_mbytes_per_sec": 0 00:17:15.229 }, 00:17:15.229 "claimed": false, 00:17:15.229 "zoned": false, 00:17:15.229 "supported_io_types": { 00:17:15.229 "read": true, 00:17:15.229 "write": true, 00:17:15.229 "unmap": true, 00:17:15.229 "write_zeroes": true, 00:17:15.229 "flush": true, 00:17:15.229 "reset": true, 00:17:15.229 "compare": true, 00:17:15.229 "compare_and_write": true, 00:17:15.229 "abort": true, 00:17:15.229 "nvme_admin": true, 00:17:15.229 "nvme_io": true 00:17:15.229 }, 00:17:15.229 "memory_domains": [ 00:17:15.229 { 00:17:15.229 "dma_device_id": "system", 00:17:15.229 "dma_device_type": 1 00:17:15.229 } 00:17:15.229 ], 00:17:15.229 "driver_specific": { 00:17:15.229 "nvme": [ 00:17:15.229 { 00:17:15.229 "trid": { 00:17:15.229 "trtype": "TCP", 00:17:15.229 "adrfam": "IPv4", 00:17:15.229 "traddr": "10.0.0.2", 00:17:15.229 "trsvcid": "4420", 00:17:15.229 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:15.229 }, 00:17:15.229 "ctrlr_data": { 00:17:15.229 "cntlid": 1, 00:17:15.229 "vendor_id": "0x8086", 00:17:15.229 "model_number": "SPDK bdev Controller", 00:17:15.229 "serial_number": "SPDK0", 00:17:15.230 "firmware_revision": "24.05.1", 00:17:15.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:15.230 "oacs": { 00:17:15.230 "security": 0, 00:17:15.230 "format": 0, 00:17:15.230 "firmware": 0, 00:17:15.230 "ns_manage": 0 00:17:15.230 }, 00:17:15.230 "multi_ctrlr": true, 00:17:15.230 "ana_reporting": false 00:17:15.230 }, 00:17:15.230 "vs": { 00:17:15.230 "nvme_version": "1.3" 00:17:15.230 }, 00:17:15.230 "ns_data": { 00:17:15.230 "id": 1, 00:17:15.230 "can_share": true 00:17:15.230 } 00:17:15.230 } 00:17:15.230 ], 00:17:15.230 "mp_policy": "active_passive" 00:17:15.230 } 00:17:15.230 } 00:17:15.230 ] 00:17:15.230 19:47:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3958787 00:17:15.230 19:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:15.230 19:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.230 Running I/O for 10 seconds... 00:17:16.166 Latency(us) 00:17:16.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.166 Nvme0n1 : 1.00 15649.00 61.13 0.00 0.00 0.00 0.00 0.00 00:17:16.166 =================================================================================================================== 00:17:16.166 Total : 15649.00 61.13 0.00 0.00 0.00 0.00 0.00 00:17:16.166 00:17:17.100 19:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:17.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.356 Nvme0n1 : 2.00 15732.50 61.46 0.00 0.00 0.00 0.00 0.00 00:17:17.356 =================================================================================================================== 00:17:17.356 Total : 15732.50 61.46 0.00 0.00 0.00 0.00 0.00 00:17:17.356 00:17:17.356 true 00:17:17.356 19:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:17.356 19:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:17.615 19:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:17.615 19:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:17.615 19:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3958787 00:17:18.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.182 Nvme0n1 : 3.00 15505.33 60.57 0.00 0.00 0.00 0.00 0.00 00:17:18.182 =================================================================================================================== 00:17:18.182 Total : 15505.33 60.57 0.00 0.00 0.00 0.00 0.00 00:17:18.182 00:17:19.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.559 Nvme0n1 : 4.00 15375.50 60.06 0.00 0.00 0.00 0.00 0.00 00:17:19.560 =================================================================================================================== 00:17:19.560 Total : 15375.50 60.06 0.00 0.00 0.00 0.00 0.00 00:17:19.560 00:17:20.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.497 Nvme0n1 : 5.00 15412.20 60.20 0.00 0.00 0.00 0.00 0.00 00:17:20.497 =================================================================================================================== 00:17:20.497 Total : 15412.20 60.20 0.00 0.00 0.00 0.00 0.00 00:17:20.497 00:17:21.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.434 Nvme0n1 : 6.00 15365.17 60.02 0.00 0.00 0.00 0.00 0.00 00:17:21.434 
=================================================================================================================== 00:17:21.434 Total : 15365.17 60.02 0.00 0.00 0.00 0.00 0.00 00:17:21.434 00:17:22.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.371 Nvme0n1 : 7.00 15329.14 59.88 0.00 0.00 0.00 0.00 0.00 00:17:22.371 =================================================================================================================== 00:17:22.371 Total : 15329.14 59.88 0.00 0.00 0.00 0.00 0.00 00:17:22.371 00:17:23.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.315 Nvme0n1 : 8.00 15341.88 59.93 0.00 0.00 0.00 0.00 0.00 00:17:23.315 =================================================================================================================== 00:17:23.315 Total : 15341.88 59.93 0.00 0.00 0.00 0.00 0.00 00:17:23.315 00:17:24.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.253 Nvme0n1 : 9.00 15351.78 59.97 0.00 0.00 0.00 0.00 0.00 00:17:24.253 =================================================================================================================== 00:17:24.253 Total : 15351.78 59.97 0.00 0.00 0.00 0.00 0.00 00:17:24.253 00:17:25.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.190 Nvme0n1 : 10.00 15397.90 60.15 0.00 0.00 0.00 0.00 0.00 00:17:25.190 =================================================================================================================== 00:17:25.190 Total : 15397.90 60.15 0.00 0.00 0.00 0.00 0.00 00:17:25.190 00:17:25.190 00:17:25.190 Latency(us) 00:17:25.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.190 Nvme0n1 : 10.00 15403.42 60.17 0.00 0.00 8305.09 3689.43 15631.55 00:17:25.190 =================================================================================================================== 00:17:25.190 Total : 15403.42 60.17 0.00 0.00 8305.09 3689.43 15631.55 00:17:25.190 0 00:17:25.190 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3958654 00:17:25.190 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3958654 ']' 00:17:25.190 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3958654 00:17:25.190 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:25.190 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.190 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3958654 00:17:25.448 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:25.448 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:25.448 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3958654' 00:17:25.448 killing process with pid 3958654 00:17:25.448 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3958654 00:17:25.448 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.448 00:17:25.448 Latency(us) 00:17:25.448 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:25.448 =================================================================================================================== 00:17:25.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.448 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3958654 00:17:25.448 19:47:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:26.015 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:26.274 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:26.274 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3956149 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3956149 00:17:26.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3956149 Killed "${NVMF_APP[@]}" "$@" 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3960119 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3960119 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3960119 ']' 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
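The trace above is the core of the lvs_grow_dirty case: after the 10-second bdevperf run the perform_tests wrapper is killed, the discovery listener and nqn.2016-06.io.spdk:cnode0 are torn down, free_clusters is read back (61), and the running nvmf target is then killed with -9 so the lvstore is left dirty before a fresh target is started. Condensed into a sketch, with paths shortened and $rpc / $nvmfpid standing in for the scripts/rpc.py invocation and target PID used throughout this log:

  free=$($rpc bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 \
          | jq -r '.[0].free_clusters')        # 61 clusters free after the run
  kill -9 "$nvmfpid"                           # hard kill: the lvstore stays dirty
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &             # nvmfappstart brings up a new target
  # the next step in the trace recreates aio_bdev so blobstore recovery can run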
00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:26.532 19:47:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.532 [2024-07-25 19:47:35.778669] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:26.532 [2024-07-25 19:47:35.778757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.532 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.532 [2024-07-25 19:47:35.843546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.532 [2024-07-25 19:47:35.926341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.532 [2024-07-25 19:47:35.926396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.532 [2024-07-25 19:47:35.926431] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.532 [2024-07-25 19:47:35.926443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.532 [2024-07-25 19:47:35.926453] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.532 [2024-07-25 19:47:35.926479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.790 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.048 [2024-07-25 19:47:36.279693] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:27.048 [2024-07-25 19:47:36.279830] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:27.048 [2024-07-25 19:47:36.279887] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:27.048 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:27.306 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 -t 2000 00:17:27.564 [ 00:17:27.564 { 00:17:27.564 "name": "432dd71b-98b1-426a-9a97-9bc4bd1f4ef8", 00:17:27.564 "aliases": [ 00:17:27.564 "lvs/lvol" 00:17:27.565 ], 00:17:27.565 "product_name": "Logical Volume", 00:17:27.565 "block_size": 4096, 00:17:27.565 "num_blocks": 38912, 00:17:27.565 "uuid": "432dd71b-98b1-426a-9a97-9bc4bd1f4ef8", 00:17:27.565 "assigned_rate_limits": { 00:17:27.565 "rw_ios_per_sec": 0, 00:17:27.565 "rw_mbytes_per_sec": 0, 00:17:27.565 "r_mbytes_per_sec": 0, 00:17:27.565 "w_mbytes_per_sec": 0 00:17:27.565 }, 00:17:27.565 "claimed": false, 00:17:27.565 "zoned": false, 00:17:27.565 "supported_io_types": { 00:17:27.565 "read": true, 00:17:27.565 "write": true, 00:17:27.565 "unmap": true, 00:17:27.565 "write_zeroes": true, 00:17:27.565 "flush": false, 00:17:27.565 "reset": true, 00:17:27.565 "compare": false, 00:17:27.565 "compare_and_write": false, 00:17:27.565 "abort": false, 00:17:27.565 "nvme_admin": false, 00:17:27.565 "nvme_io": false 00:17:27.565 }, 00:17:27.565 "driver_specific": { 00:17:27.565 "lvol": { 00:17:27.565 "lvol_store_uuid": "8ce738b1-164d-4309-a4da-e50ce3d8bd77", 00:17:27.565 "base_bdev": "aio_bdev", 00:17:27.565 "thin_provision": false, 00:17:27.565 "num_allocated_clusters": 38, 00:17:27.565 "snapshot": false, 00:17:27.565 "clone": false, 00:17:27.565 "esnap_clone": false 00:17:27.565 } 00:17:27.565 } 00:17:27.565 } 00:17:27.565 ] 00:17:27.565 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:27.565 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:27.565 19:47:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:27.823 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:27.823 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:27.823 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:28.080 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:28.080 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:28.339 [2024-07-25 19:47:37.512617] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:28.339 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:28.598 request: 00:17:28.598 { 00:17:28.598 "uuid": "8ce738b1-164d-4309-a4da-e50ce3d8bd77", 00:17:28.598 "method": "bdev_lvol_get_lvstores", 00:17:28.598 "req_id": 1 00:17:28.598 } 00:17:28.598 Got JSON-RPC error response 00:17:28.598 response: 00:17:28.598 { 00:17:28.598 "code": -19, 00:17:28.598 "message": "No such device" 00:17:28.598 } 00:17:28.598 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:28.598 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.598 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.598 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.598 19:47:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:28.859 aio_bdev 00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
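The NOT wrapper above checks the hot-remove path: once aio_bdev is deleted, bdev_lvol_get_lvstores on the old UUID has to fail with "No such device" (code -19), and recreating the AIO bdev triggers the blobstore recovery that re-exposes the lvstore and its lvol. A minimal sketch of that expectation, with $rpc, $lvs_uuid and $aio_file as placeholders for the values printed in the trace:

  $rpc bdev_aio_delete aio_bdev                   # lvstore is hot-removed with its base bdev
  if $rpc bdev_lvol_get_lvstores -u "$lvs_uuid"; then
      echo "lvstore still visible after hot-remove" >&2
      exit 1                                      # the test expects this RPC to fail
  fi
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096  # recreate; recovery brings lvs/lvol back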
00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:28.859 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.118 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 -t 2000 00:17:29.376 [ 00:17:29.376 { 00:17:29.376 "name": "432dd71b-98b1-426a-9a97-9bc4bd1f4ef8", 00:17:29.376 "aliases": [ 00:17:29.376 "lvs/lvol" 00:17:29.376 ], 00:17:29.376 "product_name": "Logical Volume", 00:17:29.376 "block_size": 4096, 00:17:29.376 "num_blocks": 38912, 00:17:29.376 "uuid": "432dd71b-98b1-426a-9a97-9bc4bd1f4ef8", 00:17:29.376 "assigned_rate_limits": { 00:17:29.376 "rw_ios_per_sec": 0, 00:17:29.376 "rw_mbytes_per_sec": 0, 00:17:29.376 "r_mbytes_per_sec": 0, 00:17:29.376 "w_mbytes_per_sec": 0 00:17:29.376 }, 00:17:29.376 "claimed": false, 00:17:29.376 "zoned": false, 00:17:29.376 "supported_io_types": { 00:17:29.376 "read": true, 00:17:29.376 "write": true, 00:17:29.376 "unmap": true, 00:17:29.376 "write_zeroes": true, 00:17:29.376 "flush": false, 00:17:29.376 "reset": true, 00:17:29.376 "compare": false, 00:17:29.376 "compare_and_write": false, 00:17:29.376 "abort": false, 00:17:29.376 "nvme_admin": false, 00:17:29.376 "nvme_io": false 00:17:29.376 }, 00:17:29.376 "driver_specific": { 00:17:29.376 "lvol": { 00:17:29.376 "lvol_store_uuid": "8ce738b1-164d-4309-a4da-e50ce3d8bd77", 00:17:29.376 "base_bdev": "aio_bdev", 00:17:29.376 "thin_provision": false, 00:17:29.376 "num_allocated_clusters": 38, 00:17:29.376 "snapshot": false, 00:17:29.376 "clone": false, 00:17:29.376 "esnap_clone": false 00:17:29.376 } 00:17:29.376 } 00:17:29.376 } 00:17:29.376 ] 00:17:29.376 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:29.376 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:29.376 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:29.634 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:29.634 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:29.634 19:47:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:29.894 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:29.894 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 432dd71b-98b1-426a-9a97-9bc4bd1f4ef8 00:17:29.894 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ce738b1-164d-4309-a4da-e50ce3d8bd77 00:17:30.153 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:30.413 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.672 00:17:30.672 real 0m19.195s 00:17:30.672 user 0m48.467s 00:17:30.672 sys 0m4.745s 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:30.672 ************************************ 00:17:30.672 END TEST lvs_grow_dirty 00:17:30.672 ************************************ 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:30.672 nvmf_trace.0 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.672 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.673 rmmod nvme_tcp 00:17:30.673 rmmod nvme_fabrics 00:17:30.673 rmmod nvme_keyring 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3960119 ']' 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3960119 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3960119 ']' 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3960119 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.673 19:47:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3960119 00:17:30.673 19:47:40 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:30.673 19:47:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:30.673 19:47:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3960119' 00:17:30.673 killing process with pid 3960119 00:17:30.673 19:47:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3960119 00:17:30.673 19:47:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3960119 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.931 19:47:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.863 19:47:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.863 00:17:32.863 real 0m41.814s 00:17:32.863 user 1m10.738s 00:17:32.863 sys 0m8.478s 00:17:32.863 19:47:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.863 19:47:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:32.863 ************************************ 00:17:32.863 END TEST nvmf_lvs_grow 00:17:32.863 ************************************ 00:17:33.122 19:47:42 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:33.122 19:47:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:33.122 19:47:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.122 19:47:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 ************************************ 00:17:33.122 START TEST nvmf_bdev_io_wait 00:17:33.122 ************************************ 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:33.122 * Looking for test storage... 
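The process_shm / rmmod / nvmftestfini sequence that closed nvmf_lvs_grow above is the teardown pattern every suite in this log uses: archive the trace shm file, unload the host-side NVMe modules, stop the target and drop its namespace. Roughly, with $output_dir and $nvmfpid as placeholders and the namespace removal only presumed (the body of _remove_spdk_ns is not shown in this trace):

  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep nvmf_trace.0
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring                          # rmmod nvme_tcp/_fabrics/_keyring
  kill "$nvmfpid"                                                            # stop nvmf_tgt
  ip netns delete cvl_0_0_ns_spdk                                            # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                                   # clear the initiator-side address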
00:17:33.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.122 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.123 19:47:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.029 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:35.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:35.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:35.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:35.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:35.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:17:35.030 00:17:35.030 --- 10.0.0.2 ping statistics --- 00:17:35.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.030 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:17:35.030 00:17:35.030 --- 10.0.0.1 ping statistics --- 00:17:35.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.030 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3962511 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3962511 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3962511 ']' 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:35.030 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.030 [2024-07-25 19:47:44.366741] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
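The two ping exchanges above verify the point-to-point topology that nvmf_tcp_init built on the E810 pair found earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side) while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), with an iptables rule opening port 4420 for NVMe/TCP. Condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) through the host firewall
  ping -c 1 10.0.0.2                                             # root ns -> target ns (0.238 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns (0.145 ms above)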
00:17:35.030 [2024-07-25 19:47:44.366824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.030 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.030 [2024-07-25 19:47:44.434535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.290 [2024-07-25 19:47:44.528017] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.291 [2024-07-25 19:47:44.528074] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.291 [2024-07-25 19:47:44.528092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.291 [2024-07-25 19:47:44.528107] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.291 [2024-07-25 19:47:44.528120] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.291 [2024-07-25 19:47:44.528187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.291 [2024-07-25 19:47:44.528244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.291 [2024-07-25 19:47:44.528846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.291 [2024-07-25 19:47:44.528857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.291 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 [2024-07-25 19:47:44.720254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.550 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.550 19:47:44 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.550 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.550 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.550 Malloc0 00:17:35.550 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.551 [2024-07-25 19:47:44.785819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3962660 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3962661 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3962663 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:35.551 { 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme$subsystem", 00:17:35.551 "trtype": "$TEST_TRANSPORT", 00:17:35.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "$NVMF_PORT", 00:17:35.551 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.551 "hdgst": ${hdgst:-false}, 00:17:35.551 "ddgst": ${ddgst:-false} 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 } 00:17:35.551 EOF 00:17:35.551 )") 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3962666 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:35.551 { 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme$subsystem", 00:17:35.551 "trtype": "$TEST_TRANSPORT", 00:17:35.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "$NVMF_PORT", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.551 "hdgst": ${hdgst:-false}, 00:17:35.551 "ddgst": ${ddgst:-false} 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 } 00:17:35.551 EOF 00:17:35.551 )") 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:35.551 { 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme$subsystem", 00:17:35.551 "trtype": "$TEST_TRANSPORT", 00:17:35.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "$NVMF_PORT", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.551 "hdgst": ${hdgst:-false}, 00:17:35.551 "ddgst": ${ddgst:-false} 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 } 00:17:35.551 EOF 00:17:35.551 )") 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:35.551 { 00:17:35.551 "params": { 00:17:35.551 
"name": "Nvme$subsystem", 00:17:35.551 "trtype": "$TEST_TRANSPORT", 00:17:35.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "$NVMF_PORT", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.551 "hdgst": ${hdgst:-false}, 00:17:35.551 "ddgst": ${ddgst:-false} 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 } 00:17:35.551 EOF 00:17:35.551 )") 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3962660 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme1", 00:17:35.551 "trtype": "tcp", 00:17:35.551 "traddr": "10.0.0.2", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "4420", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.551 "hdgst": false, 00:17:35.551 "ddgst": false 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 }' 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme1", 00:17:35.551 "trtype": "tcp", 00:17:35.551 "traddr": "10.0.0.2", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "4420", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.551 "hdgst": false, 00:17:35.551 "ddgst": false 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 }' 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme1", 00:17:35.551 "trtype": "tcp", 00:17:35.551 "traddr": "10.0.0.2", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "4420", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.551 "hdgst": false, 00:17:35.551 "ddgst": false 00:17:35.551 }, 00:17:35.551 "method": "bdev_nvme_attach_controller" 00:17:35.551 }' 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:35.551 19:47:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:35.551 "params": { 00:17:35.551 "name": "Nvme1", 00:17:35.551 "trtype": "tcp", 00:17:35.551 "traddr": "10.0.0.2", 00:17:35.551 "adrfam": "ipv4", 00:17:35.551 "trsvcid": "4420", 00:17:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.551 "hdgst": false, 00:17:35.551 "ddgst": false 00:17:35.551 }, 00:17:35.551 "method": 
"bdev_nvme_attach_controller" 00:17:35.551 }' 00:17:35.551 [2024-07-25 19:47:44.833799] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:35.551 [2024-07-25 19:47:44.833843] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:35.551 [2024-07-25 19:47:44.833843] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:35.551 [2024-07-25 19:47:44.833843] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:35.552 [2024-07-25 19:47:44.833881] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:35.552 [2024-07-25 19:47:44.833926] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 19:47:44.833926] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 19:47:44.833927] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:35.552 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:35.552 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:35.552 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.809 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.809 [2024-07-25 19:47:45.010380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.809 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.809 [2024-07-25 19:47:45.085001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.809 [2024-07-25 19:47:45.110759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.809 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.809 [2024-07-25 19:47:45.184838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:35.810 [2024-07-25 19:47:45.210555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.068 [2024-07-25 19:47:45.275459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.068 [2024-07-25 19:47:45.281033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.068 [2024-07-25 19:47:45.342747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:36.068 Running I/O for 1 seconds... 00:17:36.068 Running I/O for 1 seconds... 00:17:36.327 Running I/O for 1 seconds... 00:17:36.327 Running I/O for 1 seconds... 
00:17:37.265 00:17:37.265 Latency(us) 00:17:37.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.265 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:37.265 Nvme1n1 : 1.01 11440.51 44.69 0.00 0.00 11140.82 7378.87 21942.42 00:17:37.265 =================================================================================================================== 00:17:37.265 Total : 11440.51 44.69 0.00 0.00 11140.82 7378.87 21942.42 00:17:37.265 00:17:37.265 Latency(us) 00:17:37.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.265 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:37.265 Nvme1n1 : 1.02 5227.98 20.42 0.00 0.00 24109.64 10145.94 42331.40 00:17:37.265 =================================================================================================================== 00:17:37.265 Total : 5227.98 20.42 0.00 0.00 24109.64 10145.94 42331.40 00:17:37.265 00:17:37.265 Latency(us) 00:17:37.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.265 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:37.265 Nvme1n1 : 1.00 197530.67 771.60 0.00 0.00 645.48 270.03 904.15 00:17:37.265 =================================================================================================================== 00:17:37.265 Total : 197530.67 771.60 0.00 0.00 645.48 270.03 904.15 00:17:37.265 00:17:37.265 Latency(us) 00:17:37.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.265 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:37.265 Nvme1n1 : 1.01 5341.97 20.87 0.00 0.00 23862.99 7475.96 49516.09 00:17:37.265 =================================================================================================================== 00:17:37.265 Total : 5341.97 20.87 0.00 0.00 23862.99 7475.96 49516.09 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3962661 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3962663 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3962666 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.523 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.523 rmmod nvme_tcp 00:17:37.523 rmmod nvme_fabrics 00:17:37.523 rmmod nvme_keyring 00:17:37.523 19:47:46 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3962511 ']' 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3962511 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3962511 ']' 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3962511 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3962511 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3962511' 00:17:37.782 killing process with pid 3962511 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3962511 00:17:37.782 19:47:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3962511 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.782 19:47:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.320 19:47:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.320 00:17:40.320 real 0m6.916s 00:17:40.320 user 0m15.751s 00:17:40.320 sys 0m3.516s 00:17:40.320 19:47:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:40.320 19:47:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.320 ************************************ 00:17:40.320 END TEST nvmf_bdev_io_wait 00:17:40.320 ************************************ 00:17:40.320 19:47:49 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:40.320 19:47:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:40.320 19:47:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:40.320 19:47:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.320 ************************************ 00:17:40.320 START TEST nvmf_queue_depth 00:17:40.320 ************************************ 00:17:40.320 19:47:49 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:40.320 * Looking for test storage... 00:17:40.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.320 19:47:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.320 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.321 19:47:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.224 
19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:42.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:42.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.224 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:42.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:42.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:42.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:17:42.225 00:17:42.225 --- 10.0.0.2 ping statistics --- 00:17:42.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.225 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:17:42.225 00:17:42.225 --- 10.0.0.1 ping statistics --- 00:17:42.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.225 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3964877 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3964877 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3964877 ']' 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:42.225 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.225 [2024-07-25 19:47:51.541432] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
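For reference, the namespace setup that nvmftestinit performed just above condenses to the sketch below; the interface names (cvl_0_0, cvl_0_1), addresses, iptables rule, and nvmf_tgt arguments are taken from this run, the relative nvmf_tgt path is an assumption, and everything must run as root.

# target NIC moves into its own namespace, initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

# the target itself then runs inside the namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &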
00:17:42.225 [2024-07-25 19:47:51.541506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.225 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.225 [2024-07-25 19:47:51.605444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.483 [2024-07-25 19:47:51.690201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.483 [2024-07-25 19:47:51.690255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.483 [2024-07-25 19:47:51.690277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.483 [2024-07-25 19:47:51.690288] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.483 [2024-07-25 19:47:51.690298] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.483 [2024-07-25 19:47:51.690337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 [2024-07-25 19:47:51.820535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 Malloc0 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.483 19:47:51 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 [2024-07-25 19:47:51.880694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3964906 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3964906 /var/tmp/bdevperf.sock 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3964906 ']' 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:42.483 19:47:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.742 [2024-07-25 19:47:51.925832] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
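Spelled out as plain rpc.py invocations, the target setup above plus the bdevperf attach/run that follows is roughly the sketch below; all arguments are copied from this trace, while the relative paths and the assumption that rpc_cmd targets the target's default /var/tmp/spdk.sock are mine.

rpc_py=./scripts/rpc.py    # target RPC socket defaults to /var/tmp/spdk.sock

# build the NVMe/TCP target: transport, malloc backing bdev, subsystem, namespace, listener
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf waits (-z) on its own RPC socket; attach the controller, then kick off the run
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests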
00:17:42.742 [2024-07-25 19:47:51.925905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964906 ] 00:17:42.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.742 [2024-07-25 19:47:51.989018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.742 [2024-07-25 19:47:52.079217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:43.002 NVMe0n1 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.002 19:47:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.002 Running I/O for 10 seconds... 00:17:55.212 00:17:55.212 Latency(us) 00:17:55.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.212 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:55.212 Verification LBA range: start 0x0 length 0x4000 00:17:55.212 NVMe0n1 : 10.10 8781.37 34.30 0.00 0.00 116026.32 25049.32 73400.32 00:17:55.212 =================================================================================================================== 00:17:55.212 Total : 8781.37 34.30 0.00 0.00 116026.32 25049.32 73400.32 00:17:55.212 0 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3964906 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3964906 ']' 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3964906 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3964906 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3964906' 00:17:55.212 killing process with pid 3964906 00:17:55.212 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3964906 00:17:55.212 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.212 00:17:55.212 Latency(us) 00:17:55.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.213 =================================================================================================================== 00:17:55.213 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3964906 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.213 rmmod nvme_tcp 00:17:55.213 rmmod nvme_fabrics 00:17:55.213 rmmod nvme_keyring 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3964877 ']' 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3964877 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3964877 ']' 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3964877 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3964877 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3964877' 00:17:55.213 killing process with pid 3964877 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3964877 00:17:55.213 19:48:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3964877 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.213 19:48:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.780 19:48:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.780 00:17:55.780 real 0m15.857s 00:17:55.780 user 0m22.185s 00:17:55.780 sys 
0m3.126s 00:17:55.780 19:48:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.780 19:48:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:55.780 ************************************ 00:17:55.780 END TEST nvmf_queue_depth 00:17:55.780 ************************************ 00:17:55.780 19:48:05 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:55.780 19:48:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:55.780 19:48:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.780 19:48:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.780 ************************************ 00:17:55.780 START TEST nvmf_target_multipath 00:17:55.780 ************************************ 00:17:55.780 19:48:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:56.038 * Looking for test storage... 00:17:56.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.038 19:48:05 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.038 19:48:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.943 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:17:57.944 00:17:57.944 --- 10.0.0.2 ping statistics --- 00:17:57.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.944 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:17:57.944 00:17:57.944 --- 10.0.0.1 ping statistics --- 00:17:57.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.944 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:57.944 only one NIC for nvmf test 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.944 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.944 rmmod nvme_tcp 00:17:58.202 rmmod nvme_fabrics 00:17:58.202 rmmod nvme_keyring 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.202 19:48:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:00.133 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:00.134 00:18:00.134 real 0m4.269s 00:18:00.134 user 0m0.781s 00:18:00.134 sys 0m1.479s 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:00.134 19:48:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:00.134 ************************************ 00:18:00.134 END TEST nvmf_target_multipath 00:18:00.134 ************************************ 00:18:00.134 19:48:09 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:00.134 19:48:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:00.134 19:48:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:00.134 19:48:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:00.134 ************************************ 00:18:00.134 START TEST nvmf_zcopy 00:18:00.134 ************************************ 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:00.134 * Looking for test storage... 
00:18:00.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.134 19:48:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.392 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:00.392 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:00.392 19:48:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:00.392 19:48:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:02.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.295 
19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:02.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.295 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:02.296 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:02.296 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:18:02.296 00:18:02.296 --- 10.0.0.2 ping statistics --- 00:18:02.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.296 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:18:02.296 00:18:02.296 --- 10.0.0.1 ping statistics --- 00:18:02.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.296 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3970569 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3970569 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3970569 ']' 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:02.296 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.296 [2024-07-25 19:48:11.618196] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:02.296 [2024-07-25 19:48:11.618279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.296 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.296 [2024-07-25 19:48:11.683781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.555 [2024-07-25 19:48:11.771089] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.555 [2024-07-25 19:48:11.771185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:02.555 [2024-07-25 19:48:11.771201] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.555 [2024-07-25 19:48:11.771213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.555 [2024-07-25 19:48:11.771222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.555 [2024-07-25 19:48:11.771250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 [2024-07-25 19:48:11.908279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 [2024-07-25 19:48:11.924515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 malloc0 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.555 
19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:02.555 { 00:18:02.555 "params": { 00:18:02.555 "name": "Nvme$subsystem", 00:18:02.555 "trtype": "$TEST_TRANSPORT", 00:18:02.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.555 "adrfam": "ipv4", 00:18:02.555 "trsvcid": "$NVMF_PORT", 00:18:02.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.555 "hdgst": ${hdgst:-false}, 00:18:02.555 "ddgst": ${ddgst:-false} 00:18:02.555 }, 00:18:02.555 "method": "bdev_nvme_attach_controller" 00:18:02.555 } 00:18:02.555 EOF 00:18:02.555 )") 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:02.555 19:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:02.555 "params": { 00:18:02.555 "name": "Nvme1", 00:18:02.555 "trtype": "tcp", 00:18:02.555 "traddr": "10.0.0.2", 00:18:02.555 "adrfam": "ipv4", 00:18:02.555 "trsvcid": "4420", 00:18:02.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.555 "hdgst": false, 00:18:02.555 "ddgst": false 00:18:02.555 }, 00:18:02.555 "method": "bdev_nvme_attach_controller" 00:18:02.555 }' 00:18:02.815 [2024-07-25 19:48:12.006507] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:02.815 [2024-07-25 19:48:12.006576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3970706 ] 00:18:02.815 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.815 [2024-07-25 19:48:12.068923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.815 [2024-07-25 19:48:12.162676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.073 Running I/O for 10 seconds... 
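The trace above is the whole target bring-up for the zcopy verify pass: nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace, and the script creates a zero-copy-enabled TCP transport, a subsystem with a 32 MB malloc bdev attached as NSID 1, a data listener plus a discovery listener on 10.0.0.2:4420, and then launches bdevperf against it with a JSON config handed over /dev/fd/62. A condensed sketch of the same sequence follows; it assumes $SPDK points at a local SPDK checkout and that the target's RPC socket is the default /var/tmp/spdk.sock, and it writes the controller JSON to a temp file instead of the process substitution the harness uses. The trace only prints the inner controller entry, so the outer "subsystems"/"config" wrapper below is the standard SPDK application JSON layout, reconstructed rather than copied from the log.

    #!/usr/bin/env bash
    # Sketch of the zcopy target setup seen in the trace above (not the harness itself).
    SPDK=/path/to/spdk        # assumption: local SPDK checkout
    rpc="$SPDK/scripts/rpc.py"
    nqn=nqn.2016-06.io.spdk:cnode1

    # Zero-copy TCP transport and an NVMe-oF subsystem backed by a 32 MB malloc bdev
    # (flags exactly as issued by zcopy.sh in the trace).
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1

    # Controller entry as printed by gen_nvmf_target_json, wrapped in the standard
    # bdev-subsystem config bdevperf expects (wrapper reconstructed, see note above).
    cfg=$(mktemp)
    cat > "$cfg" <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 10-second verify run, queue depth 128, 8 KiB I/O -- same flags as the trace.
    "$SPDK/build/examples/bdevperf" --json "$cfg" -t 10 -q 128 -w verify -o 8192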
00:18:13.058 00:18:13.058 Latency(us) 00:18:13.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.058 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:13.058 Verification LBA range: start 0x0 length 0x1000 00:18:13.058 Nvme1n1 : 10.01 5699.56 44.53 0.00 0.00 22395.93 2378.71 32622.36 00:18:13.058 =================================================================================================================== 00:18:13.058 Total : 5699.56 44.53 0.00 0.00 22395.93 2378.71 32622.36 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3971899 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.316 { 00:18:13.316 "params": { 00:18:13.316 "name": "Nvme$subsystem", 00:18:13.316 "trtype": "$TEST_TRANSPORT", 00:18:13.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.316 "adrfam": "ipv4", 00:18:13.316 "trsvcid": "$NVMF_PORT", 00:18:13.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.316 "hdgst": ${hdgst:-false}, 00:18:13.316 "ddgst": ${ddgst:-false} 00:18:13.316 }, 00:18:13.316 "method": "bdev_nvme_attach_controller" 00:18:13.316 } 00:18:13.316 EOF 00:18:13.316 )") 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:13.316 [2024-07-25 19:48:22.665649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.665695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:13.316 19:48:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.316 "params": { 00:18:13.316 "name": "Nvme1", 00:18:13.316 "trtype": "tcp", 00:18:13.316 "traddr": "10.0.0.2", 00:18:13.316 "adrfam": "ipv4", 00:18:13.316 "trsvcid": "4420", 00:18:13.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.316 "hdgst": false, 00:18:13.316 "ddgst": false 00:18:13.316 }, 00:18:13.316 "method": "bdev_nvme_attach_controller" 00:18:13.316 }' 00:18:13.316 [2024-07-25 19:48:22.673608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.673636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.681630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.681657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.689650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.689674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.697661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.697681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.705141] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:13.316 [2024-07-25 19:48:22.705201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971899 ] 00:18:13.316 [2024-07-25 19:48:22.705680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.705701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.713705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.713726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.721721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.721740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.729742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.729762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.316 [2024-07-25 19:48:22.737763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.737782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.316 [2024-07-25 19:48:22.745811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.316 [2024-07-25 19:48:22.745842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.753826] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.753851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.761849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.761874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.768701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.575 [2024-07-25 19:48:22.769872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.769896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.777932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.777971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.785930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.785961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.793942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.793967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.801963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.801989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.809970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.809991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.818007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.818035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.826057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.826106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.834050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.834083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.842077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.842101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.850097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.850122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.858117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.858140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.859905] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:13.575 [2024-07-25 19:48:22.866138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.866163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.874169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.874197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.882208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.882245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.575 [2024-07-25 19:48:22.890231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.575 [2024-07-25 19:48:22.890278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.898251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.898289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.906274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.906312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.914295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.914334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.922302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.922328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.930342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.930382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.938358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.938398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.946371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.946405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.954381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.954406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.962397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.962422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.970441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.970471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.978450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:13.576 [2024-07-25 19:48:22.978478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.986473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.986500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:22.994500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:22.994528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.576 [2024-07-25 19:48:23.002518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.576 [2024-07-25 19:48:23.002546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.834 [2024-07-25 19:48:23.010542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.834 [2024-07-25 19:48:23.010570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.834 [2024-07-25 19:48:23.018559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.834 [2024-07-25 19:48:23.018587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.835 [2024-07-25 19:48:23.026581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.835 [2024-07-25 19:48:23.026606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.835 [2024-07-25 19:48:23.034608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.835 [2024-07-25 19:48:23.034637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.835 Running I/O for 5 seconds... 
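The second bdevperf pass (-t 5 -q 128 -w randrw -M 50 -o 8192, its pid recorded as perfpid=3971899 earlier in the trace) runs mixed random I/O while the script keeps issuing nvmf_subsystem_add_ns for an NSID that is already attached. Each such RPC pauses the subsystem, is rejected inside nvmf_rpc_ns_paused() with "Requested NSID 1 already in use", and the subsystem is then resumed, so the repeated *ERROR* pairs above are produced deliberately by the test rather than signalling a failure; this appears to be how the zcopy test drives in-flight zero-copy requests through subsystem pause/resume. A loop of roughly the following shape would reproduce the pattern; the exact iteration logic in zcopy.sh is not visible in this trace, so treat it as an illustrative sketch reusing $SPDK, $rpc, $nqn and $cfg from the previous sketch.

    # Background the mixed-workload bdevperf run, as the harness does (it records $! as perfpid).
    "$SPDK/build/examples/bdevperf" --json "$cfg" -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!

    # While I/O is in flight, repeatedly re-add NSID 1; every attempt pauses the
    # subsystem, fails ("Requested NSID 1 already in use"), and resumes it.
    while kill -0 "$perfpid" 2>/dev/null; do
        $rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
        sleep 0.1
    done
    wait "$perfpid"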
00:18:13.835 [2024-07-25 19:48:23.042628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:13.835 [2024-07-25 19:48:23.042654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry error pair recurs for every further add-namespace attempt, from 2024-07-25 19:48:23.057 through 19:48:26.281 ...]
00:18:16.954 [2024-07-25 19:48:26.281766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.954 [2024-07-25 19:48:26.281794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.954 [2024-07-25 19:48:26.294689] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.294716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.304683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.304711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.315575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.315618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.328682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.328710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.339353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.339381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.350255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.350283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.362866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.362894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.954 [2024-07-25 19:48:26.372540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.954 [2024-07-25 19:48:26.372567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.212 [2024-07-25 19:48:26.383977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.212 [2024-07-25 19:48:26.384005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.212 [2024-07-25 19:48:26.395450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.212 [2024-07-25 19:48:26.395478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.212 [2024-07-25 19:48:26.406640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.212 [2024-07-25 19:48:26.406667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.212 [2024-07-25 19:48:26.418132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.212 [2024-07-25 19:48:26.418159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.212 [2024-07-25 19:48:26.429285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.212 [2024-07-25 19:48:26.429313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.440777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.440806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.452387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.452414] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.465536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.465564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.476162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.476189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.486503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.486530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.497029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.497069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.509851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.509881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.520951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.520983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.532553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.532584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.543871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.543901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.555427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.555457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.567399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.567430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.578967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.578998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.590727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.590757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.602368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.602398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.614306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.614336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.625909] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.625939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.213 [2024-07-25 19:48:26.637041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.213 [2024-07-25 19:48:26.637083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.648572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.648604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.659834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.659865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.671622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.671652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.683266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.683297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.694959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.694990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.706216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.706247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.717747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.717786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.729268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.729299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.740874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.740904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.752206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.752236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.765711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.765742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.776645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.776677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.788088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.788122] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.799367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.799398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.811107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.811138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.822663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.822694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.834178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.834208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.845747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.845778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.857487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.857518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.869171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.869201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.880638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.880668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.473 [2024-07-25 19:48:26.891883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.473 [2024-07-25 19:48:26.891913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.905360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.905391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.916189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.916220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.927450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.927480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.940506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.940536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.951044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.951085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.963128] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.963159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.974320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.974351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.987487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.987518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:26.998256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:26.998286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.009759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.009790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.021310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.021341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.032824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.032855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.046002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.046033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.056052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.056091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.068404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.068435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.079761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.079802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.091234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.091265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.102526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.102556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.113901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.113932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.125506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.125536] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.137114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.137144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.148394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.148424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.733 [2024-07-25 19:48:27.159269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.733 [2024-07-25 19:48:27.159300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.170454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.170485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.181613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.181643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.193374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.193404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.204581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.204611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.216017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.216047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.227236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.227266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.240879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.240911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.251600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.251632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.263209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.263239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.274751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.274782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.286173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.286205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.297609] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.297650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.309251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.309282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.320828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.320858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.332206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.332236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.343761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.993 [2024-07-25 19:48:27.343792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.993 [2024-07-25 19:48:27.355172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.994 [2024-07-25 19:48:27.355203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.994 [2024-07-25 19:48:27.366767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.994 [2024-07-25 19:48:27.366797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.994 [2024-07-25 19:48:27.378192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.994 [2024-07-25 19:48:27.378223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.994 [2024-07-25 19:48:27.389756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.994 [2024-07-25 19:48:27.389788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.994 [2024-07-25 19:48:27.401491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.994 [2024-07-25 19:48:27.401521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.994 [2024-07-25 19:48:27.414729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.994 [2024-07-25 19:48:27.414759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.252 [2024-07-25 19:48:27.425145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.252 [2024-07-25 19:48:27.425176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.252 [2024-07-25 19:48:27.436451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.252 [2024-07-25 19:48:27.436482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.252 [2024-07-25 19:48:27.447608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.252 [2024-07-25 19:48:27.447638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.252 [2024-07-25 19:48:27.458750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.252 [2024-07-25 19:48:27.458781] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.252 [2024-07-25 19:48:27.469813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.252 [2024-07-25 19:48:27.469844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.252 [2024-07-25 19:48:27.480977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.252 [2024-07-25 19:48:27.481008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.493835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.493865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.504430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.504460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.515614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.515655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.527235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.527266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.538600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.538631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.550281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.550312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.561834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.561865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.573180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.573210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.584658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.584689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.596232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.596262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.612110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.612143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.623048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.623087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.634403] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.634433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.647693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.647724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.658600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.658630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.670131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.670161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.253 [2024-07-25 19:48:27.681385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.253 [2024-07-25 19:48:27.681414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.693156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.693187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.704703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.704733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.717845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.717875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.728929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.728959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.740109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.740147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.751105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.751136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.762722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.762752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.774388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.774419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.788041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.788079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.799115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.799146] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.810366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.810404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.821269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.821299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.832574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.832605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.844020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.844069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.855290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.855321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.866411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.866450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.877774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.877804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.889054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.889093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.900146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.900176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.913389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.913420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.923777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.923807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.511 [2024-07-25 19:48:27.935984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.511 [2024-07-25 19:48:27.936015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.771 [2024-07-25 19:48:27.947546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:27.947577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:27.959365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:27.959404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:27.970938] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:27.970968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:27.982249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:27.982279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:27.993579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:27.993609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.004632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.004662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.016136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.016166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.027664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.027694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.041251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.041282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.052577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.052607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.062381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.062411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 00:18:18.772 Latency(us) 00:18:18.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.772 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:18.772 Nvme1n1 : 5.01 11398.27 89.05 0.00 0.00 11214.86 5145.79 24563.86 00:18:18.772 =================================================================================================================== 00:18:18.772 Total : 11398.27 89.05 0.00 0.00 11214.86 5145.79 24563.86 00:18:18.772 [2024-07-25 19:48:28.067657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.067686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.075675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.075705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.083718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.083753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.091765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.091813] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.099778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.099825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.107799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.107845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.115820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.115866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.123845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.123892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.131864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.131909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.139883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.139928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.147908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.147953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.155934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.155983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.163957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.164006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.171976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.172023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.179998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.180043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.188021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.188074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.772 [2024-07-25 19:48:28.196043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.772 [2024-07-25 19:48:28.196097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.204070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.204115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.212048] 
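For context, a minimal sketch of the kind of RPC call that produces the error pair above, assuming a running SPDK target that already exposes NSID 1 on nqn.2016-06.io.spdk:cnode1 (the bdev name malloc0 is only an example taken from the commands later in this log; the exact retry loop in zcopy.sh may differ):

  # Hedged sketch, not the test script itself: re-adding an NSID that is already in use
  # makes spdk_nvmf_subsystem_add_ns_ext fail, and the RPC layer then logs "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # -> "Requested NSID 1 already in use"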
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.212087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.220075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.220101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.228134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.228179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.236166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.236215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.244169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.244213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.252155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.252182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.260185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.260218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.268263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.268312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.276267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.276313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.284237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.284262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.292261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.292285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 [2024-07-25 19:48:28.300284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.033 [2024-07-25 19:48:28.300311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3971899) - No such process 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3971899 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.033 delay0 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.033 19:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:19.033 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.033 [2024-07-25 19:48:28.383838] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:27.201 Initializing NVMe Controllers 00:18:27.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:27.201 Initialization complete. Launching workers. 00:18:27.201 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 13511 00:18:27.201 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13679, failed to submit 97 00:18:27.201 success 13567, unsuccess 112, failed 0 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.201 rmmod nvme_tcp 00:18:27.201 rmmod nvme_fabrics 00:18:27.201 rmmod nvme_keyring 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3970569 ']' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3970569 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3970569 ']' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3970569 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps 
--no-headers -o comm= 3970569 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3970569' 00:18:27.201 killing process with pid 3970569 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3970569 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3970569 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.201 19:48:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.577 19:48:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.577 00:18:28.577 real 0m28.321s 00:18:28.577 user 0m40.804s 00:18:28.577 sys 0m9.289s 00:18:28.577 19:48:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:28.577 19:48:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.577 ************************************ 00:18:28.577 END TEST nvmf_zcopy 00:18:28.577 ************************************ 00:18:28.577 19:48:37 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:28.577 19:48:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:28.577 19:48:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:28.577 19:48:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.577 ************************************ 00:18:28.577 START TEST nvmf_nmic 00:18:28.577 ************************************ 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:28.577 * Looking for test storage... 
00:18:28.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.577 19:48:37 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.577 19:48:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.578 19:48:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.122 
19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.122 19:48:39 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
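The nvmf_tcp_init step that follows isolates one port of the detected NIC pair in a private network namespace so the target address (10.0.0.2) and the initiator address (10.0.0.1) can exercise real hardware on a single host. A minimal sketch of that setup, using the interface names seen in this run (cvl_0_0/cvl_0_1, specific to this machine):

  ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                             # connectivity check before the test starts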
00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.122 19:48:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:18:31.122 00:18:31.122 --- 10.0.0.2 ping statistics --- 00:18:31.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.122 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:18:31.122 00:18:31.122 --- 10.0.0.1 ping statistics --- 00:18:31.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.122 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.122 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3975280 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3975280 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3975280 ']' 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 [2024-07-25 19:48:40.163349] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:31.123 [2024-07-25 19:48:40.163445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.123 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.123 [2024-07-25 19:48:40.229124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.123 [2024-07-25 19:48:40.320294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.123 [2024-07-25 19:48:40.320379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:31.123 [2024-07-25 19:48:40.320393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.123 [2024-07-25 19:48:40.320403] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.123 [2024-07-25 19:48:40.320412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.123 [2024-07-25 19:48:40.320486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.123 [2024-07-25 19:48:40.320541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.123 [2024-07-25 19:48:40.320612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.123 [2024-07-25 19:48:40.320615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 [2024-07-25 19:48:40.472868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 Malloc0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 [2024-07-25 19:48:40.526240] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:31.123 test case1: single bdev can't be used in multiple subsystems 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.123 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.123 [2024-07-25 19:48:40.550056] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:31.123 [2024-07-25 19:48:40.550095] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:31.123 [2024-07-25 19:48:40.550112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.381 request: 00:18:31.381 { 00:18:31.381 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:31.381 "namespace": { 00:18:31.381 "bdev_name": "Malloc0", 00:18:31.381 "no_auto_visible": false 00:18:31.381 }, 00:18:31.382 "method": "nvmf_subsystem_add_ns", 00:18:31.382 "req_id": 1 00:18:31.382 } 00:18:31.382 Got JSON-RPC error response 00:18:31.382 response: 00:18:31.382 { 00:18:31.382 "code": -32602, 00:18:31.382 "message": "Invalid parameters" 00:18:31.382 } 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:31.382 Adding namespace failed - expected result. 
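The failure above is the point of test case1: Malloc0 is already held with an exclusive_write claim by cnode1, so adding it to cnode2 must return an error instead of exposing the same bdev through two subsystems. The same sequence can be reproduced by hand against a running nvmf_tgt with SPDK's rpc.py (the same RPCs the rpc_cmd wrappers above issue); the last call is expected to fail:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py     # rpc.py shipped with this SPDK tree
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0            # claims Malloc0 (exclusive_write)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0            # rejected: bdev already claimed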
00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:31.382 test case2: host connect to nvmf target in multiple paths 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:31.382 [2024-07-25 19:48:40.558176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.382 19:48:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:31.947 19:48:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:32.516 19:48:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:32.516 19:48:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:32.516 19:48:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.517 19:48:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:32.517 19:48:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:34.421 19:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:34.421 19:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:34.421 19:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:34.678 19:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:34.678 19:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.678 19:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:34.678 19:48:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:34.678 [global] 00:18:34.678 thread=1 00:18:34.678 invalidate=1 00:18:34.678 rw=write 00:18:34.678 time_based=1 00:18:34.678 runtime=1 00:18:34.678 ioengine=libaio 00:18:34.678 direct=1 00:18:34.678 bs=4096 00:18:34.678 iodepth=1 00:18:34.678 norandommap=0 00:18:34.678 numjobs=1 00:18:34.678 00:18:34.678 verify_dump=1 00:18:34.678 verify_backlog=512 00:18:34.678 verify_state_save=0 00:18:34.678 do_verify=1 00:18:34.678 verify=crc32c-intel 00:18:34.678 [job0] 00:18:34.678 filename=/dev/nvme0n1 00:18:34.678 Could not set queue depth (nvme0n1) 00:18:34.678 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.678 fio-3.35 00:18:34.678 Starting 1 thread 00:18:36.058 00:18:36.058 job0: (groupid=0, jobs=1): err= 0: pid=3975918: Thu Jul 25 19:48:45 2024 00:18:36.058 read: IOPS=23, BW=93.9KiB/s (96.2kB/s)(96.0KiB/1022msec) 00:18:36.058 slat (nsec): min=7509, max=36877, avg=23223.71, stdev=11026.03 
00:18:36.058 clat (usec): min=312, max=41311, avg=37568.05, stdev=11474.67 00:18:36.058 lat (usec): min=328, max=41319, avg=37591.27, stdev=11476.54 00:18:36.058 clat percentiles (usec): 00:18:36.058 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[40633], 20.00th=[40633], 00:18:36.058 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:36.058 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:36.058 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:36.058 | 99.99th=[41157] 00:18:36.058 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:18:36.058 slat (usec): min=7, max=28948, avg=65.49, stdev=1278.97 00:18:36.058 clat (usec): min=134, max=308, avg=158.92, stdev=17.35 00:18:36.058 lat (usec): min=142, max=29206, avg=224.41, stdev=1283.45 00:18:36.058 clat percentiles (usec): 00:18:36.058 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:18:36.058 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:18:36.058 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 188], 00:18:36.058 | 99.00th=[ 202], 99.50th=[ 251], 99.90th=[ 310], 99.95th=[ 310], 00:18:36.058 | 99.99th=[ 310] 00:18:36.058 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:36.058 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:36.058 lat (usec) : 250=94.96%, 500=0.93% 00:18:36.058 lat (msec) : 50=4.10% 00:18:36.058 cpu : usr=0.29%, sys=0.59%, ctx=538, majf=0, minf=2 00:18:36.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.058 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.058 00:18:36.058 Run status group 0 (all jobs): 00:18:36.058 READ: bw=93.9KiB/s (96.2kB/s), 93.9KiB/s-93.9KiB/s (96.2kB/s-96.2kB/s), io=96.0KiB (98.3kB), run=1022-1022msec 00:18:36.058 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:18:36.058 00:18:36.058 Disk stats (read/write): 00:18:36.058 nvme0n1: ios=45/512, merge=0/0, ticks=1702/78, in_queue=1780, util=98.50% 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
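The job file dumped before the run corresponds to a plain fio invocation against the connected namespace; a roughly equivalent standalone command, assuming the namespace is still attached as /dev/nvme0n1, would be:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 --verify_state_save=0

The small READ figures in the summary above are presumably the crc32c verify pass reading written data back, not a separate read workload.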
00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.058 rmmod nvme_tcp 00:18:36.058 rmmod nvme_fabrics 00:18:36.058 rmmod nvme_keyring 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3975280 ']' 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3975280 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3975280 ']' 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3975280 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3975280 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3975280' 00:18:36.058 killing process with pid 3975280 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3975280 00:18:36.058 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3975280 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.317 19:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.851 19:48:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:38.851 00:18:38.851 real 0m9.874s 00:18:38.851 user 0m22.392s 00:18:38.851 sys 0m2.335s 00:18:38.851 19:48:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.851 19:48:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:38.851 ************************************ 00:18:38.851 END TEST nvmf_nmic 00:18:38.851 ************************************ 00:18:38.851 19:48:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:38.851 19:48:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:38.851 19:48:47 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.851 19:48:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:38.851 ************************************ 00:18:38.851 START TEST nvmf_fio_target 00:18:38.851 ************************************ 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:38.851 * Looking for test storage... 00:18:38.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.851 19:48:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.757 19:48:49 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:40.757 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:40.757 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.757 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.758 19:48:49 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:40.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:40.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:18:40.758 00:18:40.758 --- 10.0.0.2 ping statistics --- 00:18:40.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.758 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:18:40.758 00:18:40.758 --- 10.0.0.1 ping statistics --- 00:18:40.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.758 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3977988 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3977988 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3977988 ']' 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
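nvmfappstart launches the target inside the namespace created earlier, and waitforlisten holds the script until the application answers on its RPC socket. Stripped of the wrappers, that amounts to roughly the following (binary path, shm id, tracepoint mask and core mask taken from the command line logged above; the polling loop is a sketch of what waitforlisten does):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # poll the default RPC socket until the target is ready to accept RPCs
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done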
00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.758 19:48:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.758 [2024-07-25 19:48:49.982922] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:40.758 [2024-07-25 19:48:49.983000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.758 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.758 [2024-07-25 19:48:50.058578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.758 [2024-07-25 19:48:50.152748] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.758 [2024-07-25 19:48:50.152809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.758 [2024-07-25 19:48:50.152826] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.758 [2024-07-25 19:48:50.152840] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.758 [2024-07-25 19:48:50.152852] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.758 [2024-07-25 19:48:50.152931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.758 [2024-07-25 19:48:50.152986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.758 [2024-07-25 19:48:50.153049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.758 [2024-07-25 19:48:50.153050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.017 19:48:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.275 [2024-07-25 19:48:50.579858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.275 19:48:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:41.533 19:48:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:41.533 19:48:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:41.790 19:48:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:41.790 19:48:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:42.048 19:48:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:42.048 19:48:51 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:42.306 19:48:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:42.306 19:48:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:42.564 19:48:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:42.822 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:42.822 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.080 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:43.080 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.339 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:43.339 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:43.597 19:48:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:43.855 19:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:43.855 19:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.112 19:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:44.112 19:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:44.369 19:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.626 [2024-07-25 19:48:53.912784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.626 19:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:44.883 19:48:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:45.141 19:48:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:45.708 19:48:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:45.708 19:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:45.708 19:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:45.708 19:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:45.708 19:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:45.708 19:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:48.269 19:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:48.269 [global] 00:18:48.269 thread=1 00:18:48.269 invalidate=1 00:18:48.269 rw=write 00:18:48.269 time_based=1 00:18:48.269 runtime=1 00:18:48.269 ioengine=libaio 00:18:48.269 direct=1 00:18:48.269 bs=4096 00:18:48.269 iodepth=1 00:18:48.269 norandommap=0 00:18:48.269 numjobs=1 00:18:48.269 00:18:48.269 verify_dump=1 00:18:48.269 verify_backlog=512 00:18:48.269 verify_state_save=0 00:18:48.269 do_verify=1 00:18:48.269 verify=crc32c-intel 00:18:48.269 [job0] 00:18:48.269 filename=/dev/nvme0n1 00:18:48.269 [job1] 00:18:48.269 filename=/dev/nvme0n2 00:18:48.269 [job2] 00:18:48.269 filename=/dev/nvme0n3 00:18:48.269 [job3] 00:18:48.269 filename=/dev/nvme0n4 00:18:48.269 Could not set queue depth (nvme0n1) 00:18:48.269 Could not set queue depth (nvme0n2) 00:18:48.269 Could not set queue depth (nvme0n3) 00:18:48.269 Could not set queue depth (nvme0n4) 00:18:48.269 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.269 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.269 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.269 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.269 fio-3.35 00:18:48.269 Starting 4 threads 00:18:49.208 00:18:49.208 job0: (groupid=0, jobs=1): err= 0: pid=3979059: Thu Jul 25 19:48:58 2024 00:18:49.208 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:49.208 slat (nsec): min=5791, max=42006, avg=15675.29, stdev=4837.59 00:18:49.208 clat (usec): min=234, max=40502, avg=328.75, stdev=1026.22 00:18:49.208 lat (usec): min=240, max=40518, avg=344.42, stdev=1026.33 00:18:49.208 clat percentiles (usec): 00:18:49.208 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 277], 00:18:49.208 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 310], 00:18:49.208 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 330], 95.00th=[ 338], 00:18:49.208 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 578], 99.95th=[40633], 00:18:49.208 | 99.99th=[40633] 00:18:49.208 write: IOPS=1901, BW=7604KiB/s (7787kB/s)(7612KiB/1001msec); 0 zone resets 00:18:49.208 slat (nsec): min=7001, max=52990, avg=17791.84, stdev=6495.86 00:18:49.208 clat (usec): min=159, max=865, avg=220.76, 
stdev=31.45 00:18:49.208 lat (usec): min=172, max=875, avg=238.55, stdev=31.26 00:18:49.208 clat percentiles (usec): 00:18:49.208 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:18:49.208 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:18:49.208 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 269], 00:18:49.208 | 99.00th=[ 330], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 865], 00:18:49.208 | 99.99th=[ 865] 00:18:49.208 bw ( KiB/s): min= 8192, max= 8192, per=36.66%, avg=8192.00, stdev= 0.00, samples=1 00:18:49.208 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:49.208 lat (usec) : 250=50.65%, 500=49.17%, 750=0.12%, 1000=0.03% 00:18:49.208 lat (msec) : 50=0.03% 00:18:49.208 cpu : usr=4.00%, sys=8.20%, ctx=3439, majf=0, minf=1 00:18:49.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.208 issued rwts: total=1536,1903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.208 job1: (groupid=0, jobs=1): err= 0: pid=3979060: Thu Jul 25 19:48:58 2024 00:18:49.209 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4104KiB/1016msec) 00:18:49.209 slat (nsec): min=5505, max=53915, avg=16306.88, stdev=5069.09 00:18:49.209 clat (usec): min=233, max=42027, avg=622.30, stdev=3447.25 00:18:49.209 lat (usec): min=242, max=42046, avg=638.61, stdev=3447.45 00:18:49.209 clat percentiles (usec): 00:18:49.209 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:18:49.209 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 314], 00:18:49.209 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 437], 95.00th=[ 490], 00:18:49.209 | 99.00th=[ 644], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:18:49.209 | 99.99th=[42206] 00:18:49.209 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:18:49.209 slat (nsec): min=7137, max=51790, avg=15799.91, stdev=6344.89 00:18:49.209 clat (usec): min=156, max=1354, avg=210.42, stdev=51.23 00:18:49.209 lat (usec): min=165, max=1373, avg=226.22, stdev=51.95 00:18:49.209 clat percentiles (usec): 00:18:49.209 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:18:49.209 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:18:49.209 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 269], 00:18:49.209 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 971], 99.95th=[ 1352], 00:18:49.209 | 99.99th=[ 1352] 00:18:49.209 bw ( KiB/s): min= 4096, max= 8192, per=27.50%, avg=6144.00, stdev=2896.31, samples=2 00:18:49.209 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:49.209 lat (usec) : 250=56.48%, 500=41.80%, 750=1.25%, 1000=0.12% 00:18:49.209 lat (msec) : 2=0.04%, 20=0.04%, 50=0.27% 00:18:49.209 cpu : usr=3.35%, sys=5.22%, ctx=2562, majf=0, minf=1 00:18:49.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.209 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.209 job2: (groupid=0, jobs=1): err= 0: pid=3979061: Thu Jul 25 19:48:58 2024 00:18:49.209 read: IOPS=21, 
BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:18:49.209 slat (nsec): min=13428, max=34598, avg=28685.27, stdev=8128.40 00:18:49.209 clat (usec): min=40864, max=42067, avg=41293.81, stdev=491.20 00:18:49.209 lat (usec): min=40898, max=42083, avg=41322.49, stdev=489.64 00:18:49.209 clat percentiles (usec): 00:18:49.209 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:49.209 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:49.209 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:49.209 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:49.209 | 99.99th=[42206] 00:18:49.209 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:18:49.209 slat (nsec): min=6983, max=42891, avg=11735.14, stdev=5901.76 00:18:49.209 clat (usec): min=169, max=1318, avg=214.06, stdev=53.89 00:18:49.209 lat (usec): min=176, max=1336, avg=225.79, stdev=54.53 00:18:49.209 clat percentiles (usec): 00:18:49.209 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:18:49.209 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:18:49.209 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 247], 00:18:49.209 | 99.00th=[ 293], 99.50th=[ 367], 99.90th=[ 1319], 99.95th=[ 1319], 00:18:49.209 | 99.99th=[ 1319] 00:18:49.209 bw ( KiB/s): min= 4096, max= 4096, per=18.33%, avg=4096.00, stdev= 0.00, samples=1 00:18:49.209 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:49.209 lat (usec) : 250=91.95%, 500=3.75% 00:18:49.209 lat (msec) : 2=0.19%, 50=4.12% 00:18:49.209 cpu : usr=0.39%, sys=0.49%, ctx=538, majf=0, minf=1 00:18:49.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.209 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.209 job3: (groupid=0, jobs=1): err= 0: pid=3979062: Thu Jul 25 19:48:58 2024 00:18:49.209 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:49.209 slat (nsec): min=6011, max=58029, avg=16129.73, stdev=6265.92 00:18:49.209 clat (usec): min=229, max=720, avg=338.36, stdev=79.25 00:18:49.209 lat (usec): min=239, max=752, avg=354.49, stdev=81.35 00:18:49.209 clat percentiles (usec): 00:18:49.209 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 269], 00:18:49.209 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 338], 00:18:49.209 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 469], 95.00th=[ 498], 00:18:49.209 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 676], 99.95th=[ 717], 00:18:49.209 | 99.99th=[ 717] 00:18:49.209 write: IOPS=1784, BW=7137KiB/s (7308kB/s)(7144KiB/1001msec); 0 zone resets 00:18:49.209 slat (usec): min=6, max=1461, avg=19.96, stdev=34.92 00:18:49.209 clat (usec): min=161, max=945, avg=225.80, stdev=43.47 00:18:49.209 lat (usec): min=169, max=1878, avg=245.76, stdev=59.31 00:18:49.209 clat percentiles (usec): 00:18:49.209 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:18:49.209 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:18:49.209 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 273], 95.00th=[ 306], 00:18:49.209 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 461], 99.95th=[ 947], 00:18:49.209 | 99.99th=[ 947] 00:18:49.209 bw ( KiB/s): min= 8192, max= 8192, 
per=36.66%, avg=8192.00, stdev= 0.00, samples=1 00:18:49.209 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:49.209 lat (usec) : 250=47.44%, 500=50.51%, 750=2.02%, 1000=0.03% 00:18:49.209 cpu : usr=3.70%, sys=8.10%, ctx=3325, majf=0, minf=2 00:18:49.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.209 issued rwts: total=1536,1786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.209 00:18:49.209 Run status group 0 (all jobs): 00:18:49.209 READ: bw=15.7MiB/s (16.4MB/s), 85.7KiB/s-6138KiB/s (87.7kB/s-6285kB/s), io=16.1MiB (16.9MB), run=1001-1027msec 00:18:49.209 WRITE: bw=21.8MiB/s (22.9MB/s), 1994KiB/s-7604KiB/s (2042kB/s-7787kB/s), io=22.4MiB (23.5MB), run=1001-1027msec 00:18:49.209 00:18:49.209 Disk stats (read/write): 00:18:49.209 nvme0n1: ios=1372/1536, merge=0/0, ticks=445/318, in_queue=763, util=86.87% 00:18:49.209 nvme0n2: ios=1074/1137, merge=0/0, ticks=604/232, in_queue=836, util=90.75% 00:18:49.209 nvme0n3: ios=74/512, merge=0/0, ticks=1313/105, in_queue=1418, util=93.63% 00:18:49.209 nvme0n4: ios=1266/1536, merge=0/0, ticks=571/338, in_queue=909, util=94.22% 00:18:49.209 19:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:49.209 [global] 00:18:49.209 thread=1 00:18:49.209 invalidate=1 00:18:49.209 rw=randwrite 00:18:49.209 time_based=1 00:18:49.209 runtime=1 00:18:49.209 ioengine=libaio 00:18:49.209 direct=1 00:18:49.209 bs=4096 00:18:49.209 iodepth=1 00:18:49.209 norandommap=0 00:18:49.209 numjobs=1 00:18:49.209 00:18:49.209 verify_dump=1 00:18:49.209 verify_backlog=512 00:18:49.209 verify_state_save=0 00:18:49.209 do_verify=1 00:18:49.209 verify=crc32c-intel 00:18:49.209 [job0] 00:18:49.209 filename=/dev/nvme0n1 00:18:49.209 [job1] 00:18:49.209 filename=/dev/nvme0n2 00:18:49.209 [job2] 00:18:49.209 filename=/dev/nvme0n3 00:18:49.209 [job3] 00:18:49.209 filename=/dev/nvme0n4 00:18:49.209 Could not set queue depth (nvme0n1) 00:18:49.209 Could not set queue depth (nvme0n2) 00:18:49.209 Could not set queue depth (nvme0n3) 00:18:49.209 Could not set queue depth (nvme0n4) 00:18:49.467 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.467 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.467 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.467 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.467 fio-3.35 00:18:49.467 Starting 4 threads 00:18:50.846 00:18:50.846 job0: (groupid=0, jobs=1): err= 0: pid=3979297: Thu Jul 25 19:48:59 2024 00:18:50.846 read: IOPS=999, BW=3996KiB/s (4092kB/s)(4124KiB/1032msec) 00:18:50.846 slat (nsec): min=6748, max=57864, avg=18298.99, stdev=10446.89 00:18:50.846 clat (usec): min=219, max=41278, avg=626.25, stdev=3342.36 00:18:50.846 lat (usec): min=231, max=41293, avg=644.54, stdev=3342.94 00:18:50.846 clat percentiles (usec): 00:18:50.846 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 260], 00:18:50.846 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 
367], 00:18:50.846 | 70.00th=[ 429], 80.00th=[ 465], 90.00th=[ 482], 95.00th=[ 502], 00:18:50.846 | 99.00th=[ 553], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:50.846 | 99.99th=[41157] 00:18:50.846 write: IOPS=1488, BW=5953KiB/s (6096kB/s)(6144KiB/1032msec); 0 zone resets 00:18:50.846 slat (usec): min=6, max=18253, avg=25.62, stdev=465.44 00:18:50.846 clat (usec): min=131, max=406, avg=203.66, stdev=44.40 00:18:50.846 lat (usec): min=139, max=18470, avg=229.28, stdev=467.91 00:18:50.846 clat percentiles (usec): 00:18:50.846 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:18:50.846 | 30.00th=[ 165], 40.00th=[ 182], 50.00th=[ 206], 60.00th=[ 223], 00:18:50.846 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 269], 00:18:50.846 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 379], 99.95th=[ 408], 00:18:50.846 | 99.99th=[ 408] 00:18:50.846 bw ( KiB/s): min= 4096, max= 8192, per=38.70%, avg=6144.00, stdev=2896.31, samples=2 00:18:50.846 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:50.846 lat (usec) : 250=56.37%, 500=41.45%, 750=1.91% 00:18:50.846 lat (msec) : 50=0.27% 00:18:50.846 cpu : usr=2.33%, sys=4.36%, ctx=2572, majf=0, minf=1 00:18:50.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.846 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.846 job1: (groupid=0, jobs=1): err= 0: pid=3979298: Thu Jul 25 19:48:59 2024 00:18:50.846 read: IOPS=843, BW=3372KiB/s (3453kB/s)(3480KiB/1032msec) 00:18:50.846 slat (nsec): min=5735, max=39242, avg=14612.38, stdev=4920.44 00:18:50.846 clat (usec): min=261, max=42279, avg=899.81, stdev=4805.24 00:18:50.846 lat (usec): min=267, max=42299, avg=914.42, stdev=4806.26 00:18:50.846 clat percentiles (usec): 00:18:50.846 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:18:50.846 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:18:50.846 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 486], 00:18:50.846 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:50.846 | 99.99th=[42206] 00:18:50.846 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:18:50.846 slat (nsec): min=7102, max=42437, avg=14801.24, stdev=6995.97 00:18:50.846 clat (usec): min=168, max=404, avg=204.11, stdev=22.09 00:18:50.846 lat (usec): min=176, max=413, avg=218.91, stdev=26.24 00:18:50.846 clat percentiles (usec): 00:18:50.846 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:18:50.846 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 210], 00:18:50.846 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 233], 00:18:50.846 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 396], 99.95th=[ 404], 00:18:50.846 | 99.99th=[ 404] 00:18:50.846 bw ( KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1 00:18:50.846 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:50.846 lat (usec) : 250=53.43%, 500=44.72%, 750=1.21% 00:18:50.846 lat (msec) : 50=0.63% 00:18:50.846 cpu : usr=2.62%, sys=3.20%, ctx=1895, majf=0, minf=1 00:18:50.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:18:50.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.847 issued rwts: total=870,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.847 job2: (groupid=0, jobs=1): err= 0: pid=3979299: Thu Jul 25 19:48:59 2024 00:18:50.847 read: IOPS=520, BW=2083KiB/s (2133kB/s)(2112KiB/1014msec) 00:18:50.847 slat (nsec): min=4831, max=58745, avg=16686.87, stdev=8303.27 00:18:50.847 clat (usec): min=225, max=41971, avg=1455.41, stdev=6798.47 00:18:50.847 lat (usec): min=231, max=41988, avg=1472.09, stdev=6799.68 00:18:50.847 clat percentiles (usec): 00:18:50.847 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:18:50.847 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:18:50.847 | 70.00th=[ 281], 80.00th=[ 330], 90.00th=[ 469], 95.00th=[ 486], 00:18:50.847 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:50.847 | 99.99th=[42206] 00:18:50.847 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:18:50.847 slat (nsec): min=5978, max=44187, avg=11683.47, stdev=5116.46 00:18:50.847 clat (usec): min=150, max=337, avg=213.01, stdev=27.96 00:18:50.847 lat (usec): min=158, max=368, avg=224.69, stdev=25.57 00:18:50.847 clat percentiles (usec): 00:18:50.847 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 188], 00:18:50.847 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 221], 00:18:50.847 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 260], 00:18:50.847 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 338], 00:18:50.847 | 99.99th=[ 338] 00:18:50.847 bw ( KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1 00:18:50.847 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:50.847 lat (usec) : 250=66.30%, 500=32.28%, 750=0.39% 00:18:50.847 lat (msec) : 2=0.06%, 50=0.97% 00:18:50.847 cpu : usr=1.48%, sys=1.88%, ctx=1552, majf=0, minf=1 00:18:50.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.847 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.847 job3: (groupid=0, jobs=1): err= 0: pid=3979300: Thu Jul 25 19:48:59 2024 00:18:50.847 read: IOPS=24, BW=98.1KiB/s (100kB/s)(100KiB/1019msec) 00:18:50.847 slat (nsec): min=6490, max=37638, avg=27742.04, stdev=9671.87 00:18:50.847 clat (usec): min=351, max=42064, avg=36396.90, stdev=13525.06 00:18:50.847 lat (usec): min=388, max=42080, avg=36424.64, stdev=13522.82 00:18:50.847 clat percentiles (usec): 00:18:50.847 | 1.00th=[ 351], 5.00th=[ 379], 10.00th=[ 865], 20.00th=[40633], 00:18:50.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:50.847 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:50.847 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:50.847 | 99.99th=[42206] 00:18:50.847 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:18:50.847 slat (nsec): min=6456, max=34537, avg=7433.38, stdev=2065.99 00:18:50.847 clat (usec): min=169, max=402, avg=194.91, stdev=26.12 00:18:50.847 lat (usec): min=176, max=414, avg=202.34, stdev=26.74 00:18:50.847 clat percentiles (usec): 
00:18:50.847 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 182], 00:18:50.847 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:18:50.847 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 219], 95.00th=[ 231], 00:18:50.847 | 99.00th=[ 338], 99.50th=[ 379], 99.90th=[ 404], 99.95th=[ 404], 00:18:50.847 | 99.99th=[ 404] 00:18:50.847 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:50.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:50.847 lat (usec) : 250=93.11%, 500=2.61%, 1000=0.19% 00:18:50.847 lat (msec) : 50=4.10% 00:18:50.847 cpu : usr=0.20%, sys=0.39%, ctx=538, majf=0, minf=2 00:18:50.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.847 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.847 00:18:50.847 Run status group 0 (all jobs): 00:18:50.847 READ: bw=9512KiB/s (9740kB/s), 98.1KiB/s-3996KiB/s (100kB/s-4092kB/s), io=9816KiB (10.1MB), run=1014-1032msec 00:18:50.847 WRITE: bw=15.5MiB/s (16.3MB/s), 2010KiB/s-5953KiB/s (2058kB/s-6096kB/s), io=16.0MiB (16.8MB), run=1014-1032msec 00:18:50.847 00:18:50.847 Disk stats (read/write): 00:18:50.847 nvme0n1: ios=1050/1536, merge=0/0, ticks=1411/306, in_queue=1717, util=96.89% 00:18:50.847 nvme0n2: ios=911/1024, merge=0/0, ticks=1336/199, in_queue=1535, util=97.15% 00:18:50.847 nvme0n3: ios=523/1024, merge=0/0, ticks=555/210, in_queue=765, util=88.80% 00:18:50.847 nvme0n4: ios=73/512, merge=0/0, ticks=1062/98, in_queue=1160, util=97.25% 00:18:50.847 19:49:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:50.847 [global] 00:18:50.847 thread=1 00:18:50.847 invalidate=1 00:18:50.847 rw=write 00:18:50.847 time_based=1 00:18:50.847 runtime=1 00:18:50.847 ioengine=libaio 00:18:50.847 direct=1 00:18:50.847 bs=4096 00:18:50.847 iodepth=128 00:18:50.847 norandommap=0 00:18:50.847 numjobs=1 00:18:50.847 00:18:50.847 verify_dump=1 00:18:50.847 verify_backlog=512 00:18:50.847 verify_state_save=0 00:18:50.847 do_verify=1 00:18:50.847 verify=crc32c-intel 00:18:50.847 [job0] 00:18:50.847 filename=/dev/nvme0n1 00:18:50.847 [job1] 00:18:50.847 filename=/dev/nvme0n2 00:18:50.847 [job2] 00:18:50.847 filename=/dev/nvme0n3 00:18:50.847 [job3] 00:18:50.847 filename=/dev/nvme0n4 00:18:50.847 Could not set queue depth (nvme0n1) 00:18:50.847 Could not set queue depth (nvme0n2) 00:18:50.847 Could not set queue depth (nvme0n3) 00:18:50.847 Could not set queue depth (nvme0n4) 00:18:50.847 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.847 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.847 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.847 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.847 fio-3.35 00:18:50.847 Starting 4 threads 00:18:52.226 00:18:52.226 job0: (groupid=0, jobs=1): err= 0: pid=3979526: Thu Jul 25 19:49:01 2024 00:18:52.226 read: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(12.1MiB/1016msec) 
00:18:52.226 slat (usec): min=3, max=19488, avg=170.95, stdev=1288.66 00:18:52.226 clat (usec): min=5161, max=55976, avg=20036.66, stdev=9060.58 00:18:52.226 lat (usec): min=5176, max=55993, avg=20207.61, stdev=9141.87 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 6849], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11338], 00:18:52.226 | 30.00th=[12256], 40.00th=[20317], 50.00th=[20317], 60.00th=[20841], 00:18:52.226 | 70.00th=[21103], 80.00th=[23200], 90.00th=[32637], 95.00th=[37487], 00:18:52.226 | 99.00th=[51643], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:18:52.226 | 99.99th=[55837] 00:18:52.226 write: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec); 0 zone resets 00:18:52.226 slat (usec): min=5, max=21355, avg=122.04, stdev=691.91 00:18:52.226 clat (usec): min=3693, max=59361, avg=18549.73, stdev=8196.86 00:18:52.226 lat (usec): min=3710, max=59381, avg=18671.77, stdev=8254.29 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 4686], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[ 9634], 00:18:52.226 | 30.00th=[16319], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:18:52.226 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22676], 95.00th=[28967], 00:18:52.226 | 99.00th=[53740], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:18:52.226 | 99.99th=[59507] 00:18:52.226 bw ( KiB/s): min=12504, max=15320, per=27.09%, avg=13912.00, stdev=1991.21, samples=2 00:18:52.226 iops : min= 3126, max= 3830, avg=3478.00, stdev=497.80, samples=2 00:18:52.226 lat (msec) : 4=0.18%, 10=15.38%, 20=25.95%, 50=56.96%, 100=1.53% 00:18:52.226 cpu : usr=5.81%, sys=7.39%, ctx=383, majf=0, minf=1 00:18:52.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.226 issued rwts: total=3094,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.226 job1: (groupid=0, jobs=1): err= 0: pid=3979527: Thu Jul 25 19:49:01 2024 00:18:52.226 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:18:52.226 slat (usec): min=2, max=12654, avg=174.39, stdev=937.34 00:18:52.226 clat (usec): min=8180, max=45547, avg=22153.56, stdev=7725.56 00:18:52.226 lat (usec): min=8220, max=45584, avg=22327.94, stdev=7811.62 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 8848], 5.00th=[11076], 10.00th=[12125], 20.00th=[14746], 00:18:52.226 | 30.00th=[16909], 40.00th=[19006], 50.00th=[22152], 60.00th=[24511], 00:18:52.226 | 70.00th=[26870], 80.00th=[29230], 90.00th=[32113], 95.00th=[35390], 00:18:52.226 | 99.00th=[39060], 99.50th=[39060], 99.90th=[43254], 99.95th=[45351], 00:18:52.226 | 99.99th=[45351] 00:18:52.226 write: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1002msec); 0 zone resets 00:18:52.226 slat (usec): min=4, max=10002, avg=136.94, stdev=725.16 00:18:52.226 clat (usec): min=1058, max=41114, avg=18328.23, stdev=5459.64 00:18:52.226 lat (usec): min=1070, max=41128, avg=18465.17, stdev=5521.80 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 4817], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:18:52.226 | 30.00th=[13304], 40.00th=[17957], 50.00th=[19530], 60.00th=[19792], 00:18:52.226 | 70.00th=[20579], 80.00th=[23462], 90.00th=[24773], 95.00th=[26084], 00:18:52.226 | 99.00th=[33162], 99.50th=[33162], 99.90th=[35390], 99.95th=[38536], 00:18:52.226 | 99.99th=[41157] 00:18:52.226 bw ( 
KiB/s): min=12288, max=12432, per=24.07%, avg=12360.00, stdev=101.82, samples=2 00:18:52.226 iops : min= 3072, max= 3108, avg=3090.00, stdev=25.46, samples=2 00:18:52.226 lat (msec) : 2=0.19%, 10=3.12%, 20=50.39%, 50=46.29% 00:18:52.226 cpu : usr=3.00%, sys=5.09%, ctx=310, majf=0, minf=1 00:18:52.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.226 issued rwts: total=3072,3201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.226 job2: (groupid=0, jobs=1): err= 0: pid=3979528: Thu Jul 25 19:49:01 2024 00:18:52.226 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:18:52.226 slat (usec): min=3, max=7937, avg=121.94, stdev=650.67 00:18:52.226 clat (usec): min=8277, max=27476, avg=15550.37, stdev=2883.31 00:18:52.226 lat (usec): min=8294, max=27515, avg=15672.31, stdev=2935.57 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 9503], 5.00th=[11600], 10.00th=[12649], 20.00th=[13173], 00:18:52.226 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15401], 60.00th=[16057], 00:18:52.226 | 70.00th=[16319], 80.00th=[17171], 90.00th=[19006], 95.00th=[21627], 00:18:52.226 | 99.00th=[24249], 99.50th=[25035], 99.90th=[26870], 99.95th=[26870], 00:18:52.226 | 99.99th=[27395] 00:18:52.226 write: IOPS=3630, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1002msec); 0 zone resets 00:18:52.226 slat (usec): min=5, max=7514, avg=141.86, stdev=642.38 00:18:52.226 clat (usec): min=567, max=34206, avg=19468.43, stdev=6168.85 00:18:52.226 lat (usec): min=3530, max=34226, avg=19610.29, stdev=6218.73 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 5080], 5.00th=[12649], 10.00th=[12911], 20.00th=[13960], 00:18:52.226 | 30.00th=[15795], 40.00th=[16319], 50.00th=[18744], 60.00th=[19792], 00:18:52.226 | 70.00th=[20579], 80.00th=[24773], 90.00th=[29754], 95.00th=[32375], 00:18:52.226 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:18:52.226 | 99.99th=[34341] 00:18:52.226 bw ( KiB/s): min=12344, max=16328, per=27.91%, avg=14336.00, stdev=2817.11, samples=2 00:18:52.226 iops : min= 3086, max= 4082, avg=3584.00, stdev=704.28, samples=2 00:18:52.226 lat (usec) : 750=0.01% 00:18:52.226 lat (msec) : 4=0.21%, 10=1.56%, 20=76.47%, 50=21.74% 00:18:52.226 cpu : usr=6.09%, sys=9.59%, ctx=399, majf=0, minf=1 00:18:52.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.226 issued rwts: total=3584,3638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.226 job3: (groupid=0, jobs=1): err= 0: pid=3979529: Thu Jul 25 19:49:01 2024 00:18:52.226 read: IOPS=2787, BW=10.9MiB/s (11.4MB/s)(11.4MiB/1051msec) 00:18:52.226 slat (usec): min=3, max=18651, avg=169.70, stdev=1205.28 00:18:52.226 clat (usec): min=6607, max=56500, avg=21977.35, stdev=8456.75 00:18:52.226 lat (usec): min=6619, max=64482, avg=22147.05, stdev=8508.01 00:18:52.226 clat percentiles (usec): 00:18:52.226 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[13566], 20.00th=[16712], 00:18:52.226 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20317], 60.00th=[20841], 00:18:52.226 | 70.00th=[21103], 80.00th=[26084], 
90.00th=[31851], 95.00th=[36963], 00:18:52.226 | 99.00th=[51643], 99.50th=[52167], 99.90th=[56361], 99.95th=[56361], 00:18:52.226 | 99.99th=[56361] 00:18:52.226 write: IOPS=2922, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1051msec); 0 zone resets 00:18:52.226 slat (usec): min=4, max=16199, avg=158.80, stdev=795.66 00:18:52.226 clat (usec): min=1300, max=93472, avg=22436.02, stdev=13417.61 00:18:52.227 lat (usec): min=1310, max=93482, avg=22594.82, stdev=13503.28 00:18:52.227 clat percentiles (usec): 00:18:52.227 | 1.00th=[ 6063], 5.00th=[10945], 10.00th=[12256], 20.00th=[19268], 00:18:52.227 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20579], 60.00th=[20579], 00:18:52.227 | 70.00th=[21103], 80.00th=[21365], 90.00th=[23725], 95.00th=[46400], 00:18:52.227 | 99.00th=[88605], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:18:52.227 | 99.99th=[93848] 00:18:52.227 bw ( KiB/s): min=11984, max=12592, per=23.92%, avg=12288.00, stdev=429.92, samples=2 00:18:52.227 iops : min= 2996, max= 3148, avg=3072.00, stdev=107.48, samples=2 00:18:52.227 lat (msec) : 2=0.03%, 4=0.10%, 10=2.38%, 20=31.57%, 50=61.43% 00:18:52.227 lat (msec) : 100=4.48% 00:18:52.227 cpu : usr=2.10%, sys=4.29%, ctx=394, majf=0, minf=1 00:18:52.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:52.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.227 issued rwts: total=2930,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.227 00:18:52.227 Run status group 0 (all jobs): 00:18:52.227 READ: bw=47.1MiB/s (49.4MB/s), 10.9MiB/s-14.0MiB/s (11.4MB/s-14.7MB/s), io=49.5MiB (51.9MB), run=1002-1051msec 00:18:52.227 WRITE: bw=50.2MiB/s (52.6MB/s), 11.4MiB/s-14.2MiB/s (12.0MB/s-14.9MB/s), io=52.7MiB (55.3MB), run=1002-1051msec 00:18:52.227 00:18:52.227 Disk stats (read/write): 00:18:52.227 nvme0n1: ios=2324/2560, merge=0/0, ticks=52555/47152, in_queue=99707, util=99.10% 00:18:52.227 nvme0n2: ios=2360/2560, merge=0/0, ticks=19488/14452, in_queue=33940, util=97.84% 00:18:52.227 nvme0n3: ios=2606/3055, merge=0/0, ticks=21168/28280, in_queue=49448, util=96.97% 00:18:52.227 nvme0n4: ios=2558/2560, merge=0/0, ticks=52239/47984, in_queue=100223, util=97.58% 00:18:52.227 19:49:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:52.227 [global] 00:18:52.227 thread=1 00:18:52.227 invalidate=1 00:18:52.227 rw=randwrite 00:18:52.227 time_based=1 00:18:52.227 runtime=1 00:18:52.227 ioengine=libaio 00:18:52.227 direct=1 00:18:52.227 bs=4096 00:18:52.227 iodepth=128 00:18:52.227 norandommap=0 00:18:52.227 numjobs=1 00:18:52.227 00:18:52.227 verify_dump=1 00:18:52.227 verify_backlog=512 00:18:52.227 verify_state_save=0 00:18:52.227 do_verify=1 00:18:52.227 verify=crc32c-intel 00:18:52.227 [job0] 00:18:52.227 filename=/dev/nvme0n1 00:18:52.227 [job1] 00:18:52.227 filename=/dev/nvme0n2 00:18:52.227 [job2] 00:18:52.227 filename=/dev/nvme0n3 00:18:52.227 [job3] 00:18:52.227 filename=/dev/nvme0n4 00:18:52.227 Could not set queue depth (nvme0n1) 00:18:52.227 Could not set queue depth (nvme0n2) 00:18:52.227 Could not set queue depth (nvme0n3) 00:18:52.227 Could not set queue depth (nvme0n4) 00:18:52.484 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.484 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.484 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.484 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.484 fio-3.35 00:18:52.484 Starting 4 threads 00:18:53.857 00:18:53.857 job0: (groupid=0, jobs=1): err= 0: pid=3979780: Thu Jul 25 19:49:02 2024 00:18:53.857 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:18:53.857 slat (usec): min=2, max=12045, avg=129.98, stdev=802.19 00:18:53.857 clat (usec): min=5429, max=47404, avg=15878.15, stdev=7539.20 00:18:53.857 lat (usec): min=5441, max=47416, avg=16008.13, stdev=7609.90 00:18:53.857 clat percentiles (usec): 00:18:53.857 | 1.00th=[ 6325], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10814], 00:18:53.857 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12387], 60.00th=[13829], 00:18:53.857 | 70.00th=[16712], 80.00th=[20579], 90.00th=[28443], 95.00th=[32375], 00:18:53.857 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41681], 99.95th=[42730], 00:18:53.857 | 99.99th=[47449] 00:18:53.857 write: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1003msec); 0 zone resets 00:18:53.858 slat (usec): min=3, max=14033, avg=101.54, stdev=659.91 00:18:53.858 clat (usec): min=489, max=60371, avg=14343.37, stdev=6519.67 00:18:53.858 lat (usec): min=2992, max=60377, avg=14444.91, stdev=6564.22 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 5014], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10552], 00:18:53.858 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12387], 60.00th=[12518], 00:18:53.858 | 70.00th=[13829], 80.00th=[16909], 90.00th=[25560], 95.00th=[28181], 00:18:53.858 | 99.00th=[39584], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:18:53.858 | 99.99th=[60556] 00:18:53.858 bw ( KiB/s): min=16384, max=17128, per=26.61%, avg=16756.00, stdev=526.09, samples=2 00:18:53.858 iops : min= 4096, max= 4282, avg=4189.00, stdev=131.52, samples=2 00:18:53.858 lat (usec) : 500=0.01% 00:18:53.858 lat (msec) : 4=0.42%, 10=10.58%, 20=71.04%, 50=17.65%, 100=0.30% 00:18:53.858 cpu : usr=3.39%, sys=5.99%, ctx=370, majf=0, minf=1 00:18:53.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:53.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.858 issued rwts: total=4096,4317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.858 job1: (groupid=0, jobs=1): err= 0: pid=3979801: Thu Jul 25 19:49:02 2024 00:18:53.858 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:18:53.858 slat (usec): min=2, max=21913, avg=134.15, stdev=904.80 00:18:53.858 clat (usec): min=4122, max=51567, avg=17377.84, stdev=9374.28 00:18:53.858 lat (usec): min=4127, max=51576, avg=17511.99, stdev=9418.84 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10683], 00:18:53.858 | 30.00th=[11469], 40.00th=[12125], 50.00th=[13698], 60.00th=[15270], 00:18:53.858 | 70.00th=[18482], 80.00th=[22938], 90.00th=[32375], 95.00th=[38536], 00:18:53.858 | 99.00th=[49021], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:18:53.858 | 99.99th=[51643] 00:18:53.858 write: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1007msec); 0 zone resets 00:18:53.858 slat (usec): min=3, 
max=18984, avg=100.82, stdev=659.08 00:18:53.858 clat (usec): min=650, max=42272, avg=13371.16, stdev=4371.44 00:18:53.858 lat (usec): min=741, max=42278, avg=13471.98, stdev=4413.91 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10683], 00:18:53.858 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11994], 60.00th=[12649], 00:18:53.858 | 70.00th=[14353], 80.00th=[15401], 90.00th=[19006], 95.00th=[23200], 00:18:53.858 | 99.00th=[32637], 99.50th=[33162], 99.90th=[33817], 99.95th=[34866], 00:18:53.858 | 99.99th=[42206] 00:18:53.858 bw ( KiB/s): min=16344, max=16520, per=26.10%, avg=16432.00, stdev=124.45, samples=2 00:18:53.858 iops : min= 4086, max= 4130, avg=4108.00, stdev=31.11, samples=2 00:18:53.858 lat (usec) : 750=0.04% 00:18:53.858 lat (msec) : 10=9.73%, 20=73.86%, 50=16.00%, 100=0.37% 00:18:53.858 cpu : usr=3.68%, sys=5.07%, ctx=344, majf=0, minf=1 00:18:53.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:53.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.858 issued rwts: total=4096,4236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.858 job2: (groupid=0, jobs=1): err= 0: pid=3979836: Thu Jul 25 19:49:02 2024 00:18:53.858 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:18:53.858 slat (usec): min=2, max=26612, avg=148.89, stdev=1016.30 00:18:53.858 clat (usec): min=5661, max=84225, avg=18918.11, stdev=11939.04 00:18:53.858 lat (usec): min=5665, max=84244, avg=19067.00, stdev=12006.98 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11600], 20.00th=[12649], 00:18:53.858 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14746], 60.00th=[15795], 00:18:53.858 | 70.00th=[18482], 80.00th=[23987], 90.00th=[28181], 95.00th=[38011], 00:18:53.858 | 99.00th=[72877], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:18:53.858 | 99.99th=[84411] 00:18:53.858 write: IOPS=3689, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec); 0 zone resets 00:18:53.858 slat (usec): min=3, max=23762, avg=114.65, stdev=810.52 00:18:53.858 clat (usec): min=3368, max=53683, avg=16058.74, stdev=7611.83 00:18:53.858 lat (usec): min=6450, max=53689, avg=16173.39, stdev=7672.87 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 7111], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11863], 00:18:53.858 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:18:53.858 | 70.00th=[15270], 80.00th=[18744], 90.00th=[28967], 95.00th=[33424], 00:18:53.858 | 99.00th=[50594], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:18:53.858 | 99.99th=[53740] 00:18:53.858 bw ( KiB/s): min=13264, max=15496, per=22.84%, avg=14380.00, stdev=1578.26, samples=2 00:18:53.858 iops : min= 3316, max= 3874, avg=3595.00, stdev=394.57, samples=2 00:18:53.858 lat (msec) : 4=0.01%, 10=6.26%, 20=72.52%, 50=19.36%, 100=1.85% 00:18:53.858 cpu : usr=3.48%, sys=5.96%, ctx=336, majf=0, minf=1 00:18:53.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:53.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.858 issued rwts: total=3584,3715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.858 latency : target=0, window=0, percentile=100.00%, depth=128 
00:18:53.858 job3: (groupid=0, jobs=1): err= 0: pid=3979849: Thu Jul 25 19:49:02 2024 00:18:53.858 read: IOPS=3421, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1007msec) 00:18:53.858 slat (usec): min=2, max=19604, avg=126.72, stdev=871.70 00:18:53.858 clat (usec): min=4561, max=38102, avg=17426.91, stdev=6782.69 00:18:53.858 lat (usec): min=4978, max=38141, avg=17553.63, stdev=6841.05 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 6456], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[12387], 00:18:53.858 | 30.00th=[12649], 40.00th=[13698], 50.00th=[14877], 60.00th=[17957], 00:18:53.858 | 70.00th=[20841], 80.00th=[24249], 90.00th=[27919], 95.00th=[30016], 00:18:53.858 | 99.00th=[35914], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:18:53.858 | 99.99th=[38011] 00:18:53.858 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:18:53.858 slat (usec): min=3, max=21799, avg=137.24, stdev=910.08 00:18:53.858 clat (usec): min=557, max=46189, avg=18913.48, stdev=9314.36 00:18:53.858 lat (usec): min=575, max=46194, avg=19050.72, stdev=9382.67 00:18:53.858 clat percentiles (usec): 00:18:53.858 | 1.00th=[ 2933], 5.00th=[ 6456], 10.00th=[ 8029], 20.00th=[11338], 00:18:53.858 | 30.00th=[13042], 40.00th=[14746], 50.00th=[17957], 60.00th=[21627], 00:18:53.858 | 70.00th=[22676], 80.00th=[25822], 90.00th=[31589], 95.00th=[39060], 00:18:53.858 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:18:53.858 | 99.99th=[46400] 00:18:53.858 bw ( KiB/s): min=13232, max=15440, per=22.77%, avg=14336.00, stdev=1561.29, samples=2 00:18:53.858 iops : min= 3308, max= 3860, avg=3584.00, stdev=390.32, samples=2 00:18:53.858 lat (usec) : 750=0.04% 00:18:53.858 lat (msec) : 4=1.20%, 10=12.31%, 20=48.41%, 50=38.04% 00:18:53.858 cpu : usr=2.39%, sys=5.37%, ctx=262, majf=0, minf=1 00:18:53.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:53.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.858 issued rwts: total=3445,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.858 00:18:53.858 Run status group 0 (all jobs): 00:18:53.858 READ: bw=59.0MiB/s (61.9MB/s), 13.4MiB/s-16.0MiB/s (14.0MB/s-16.7MB/s), io=59.5MiB (62.3MB), run=1003-1007msec 00:18:53.858 WRITE: bw=61.5MiB/s (64.5MB/s), 13.9MiB/s-16.8MiB/s (14.6MB/s-17.6MB/s), io=61.9MiB (64.9MB), run=1003-1007msec 00:18:53.858 00:18:53.858 Disk stats (read/write): 00:18:53.858 nvme0n1: ios=3452/3584, merge=0/0, ticks=24394/20149, in_queue=44543, util=92.28% 00:18:53.858 nvme0n2: ios=3471/3584, merge=0/0, ticks=18970/18462, in_queue=37432, util=90.54% 00:18:53.858 nvme0n3: ios=3094/3424, merge=0/0, ticks=21204/21398, in_queue=42602, util=97.80% 00:18:53.858 nvme0n4: ios=2587/2783, merge=0/0, ticks=28738/36954, in_queue=65692, util=88.47% 00:18:53.858 19:49:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:53.858 19:49:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3980014 00:18:53.858 19:49:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:53.858 19:49:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:53.858 [global] 00:18:53.858 thread=1 00:18:53.858 invalidate=1 00:18:53.858 rw=read 00:18:53.858 time_based=1 00:18:53.858 runtime=10 00:18:53.858 
ioengine=libaio 00:18:53.858 direct=1 00:18:53.858 bs=4096 00:18:53.858 iodepth=1 00:18:53.858 norandommap=1 00:18:53.858 numjobs=1 00:18:53.858 00:18:53.858 [job0] 00:18:53.858 filename=/dev/nvme0n1 00:18:53.858 [job1] 00:18:53.858 filename=/dev/nvme0n2 00:18:53.858 [job2] 00:18:53.858 filename=/dev/nvme0n3 00:18:53.858 [job3] 00:18:53.858 filename=/dev/nvme0n4 00:18:53.858 Could not set queue depth (nvme0n1) 00:18:53.858 Could not set queue depth (nvme0n2) 00:18:53.858 Could not set queue depth (nvme0n3) 00:18:53.858 Could not set queue depth (nvme0n4) 00:18:53.858 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.859 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.859 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.859 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.859 fio-3.35 00:18:53.859 Starting 4 threads 00:18:57.141 19:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:57.141 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=34488320, buflen=4096 00:18:57.141 fio: pid=3980110, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:57.141 19:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:57.398 19:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:57.398 19:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:57.398 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20770816, buflen=4096 00:18:57.398 fio: pid=3980109, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:57.680 19:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:57.680 19:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:57.680 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=15409152, buflen=4096 00:18:57.680 fio: pid=3980106, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:57.680 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=499712, buflen=4096 00:18:57.680 fio: pid=3980107, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:57.946 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:57.946 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:57.946 00:18:57.946 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3980106: Thu Jul 25 19:49:07 2024 00:18:57.946 read: IOPS=1087, BW=4348KiB/s (4452kB/s)(14.7MiB/3461msec) 00:18:57.946 slat (usec): min=4, max=7787, avg=13.53, stdev=126.95 00:18:57.946 clat (usec): min=216, max=50684, avg=897.76, stdev=4956.73 00:18:57.946 lat (usec): min=221, max=50700, avg=909.23, 
stdev=4958.24 00:18:57.946 clat percentiles (usec): 00:18:57.946 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:18:57.946 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 285], 00:18:57.946 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 383], 95.00th=[ 469], 00:18:57.946 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:18:57.946 | 99.99th=[50594] 00:18:57.946 bw ( KiB/s): min= 96, max=14232, per=24.49%, avg=4626.67, stdev=5936.71, samples=6 00:18:57.946 iops : min= 24, max= 3558, avg=1156.67, stdev=1484.18, samples=6 00:18:57.946 lat (usec) : 250=30.53%, 500=65.88%, 750=2.05% 00:18:57.946 lat (msec) : 4=0.03%, 50=1.46%, 100=0.03% 00:18:57.946 cpu : usr=0.58%, sys=1.36%, ctx=3767, majf=0, minf=1 00:18:57.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.946 issued rwts: total=3763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.947 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3980107: Thu Jul 25 19:49:07 2024 00:18:57.947 read: IOPS=33, BW=133KiB/s (136kB/s)(488KiB/3679msec) 00:18:57.947 slat (usec): min=6, max=3900, avg=50.87, stdev=350.07 00:18:57.947 clat (usec): min=256, max=45539, avg=29911.66, stdev=18444.83 00:18:57.947 lat (usec): min=268, max=45552, avg=29962.78, stdev=18470.90 00:18:57.947 clat percentiles (usec): 00:18:57.947 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 363], 20.00th=[ 404], 00:18:57.947 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:57.947 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:57.947 | 99.00th=[42206], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:18:57.947 | 99.99th=[45351] 00:18:57.947 bw ( KiB/s): min= 96, max= 272, per=0.68%, avg=128.71, stdev=63.78, samples=7 00:18:57.947 iops : min= 24, max= 68, avg=32.14, stdev=15.96, samples=7 00:18:57.947 lat (usec) : 500=26.02%, 750=1.63% 00:18:57.947 lat (msec) : 50=71.54% 00:18:57.947 cpu : usr=0.14%, sys=0.00%, ctx=127, majf=0, minf=1 00:18:57.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.947 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.947 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.947 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3980109: Thu Jul 25 19:49:07 2024 00:18:57.947 read: IOPS=1596, BW=6385KiB/s (6538kB/s)(19.8MiB/3177msec) 00:18:57.947 slat (usec): min=6, max=8974, avg=11.52, stdev=165.43 00:18:57.947 clat (usec): min=222, max=41982, avg=608.46, stdev=3647.43 00:18:57.947 lat (usec): min=231, max=41997, avg=619.97, stdev=3651.56 00:18:57.947 clat percentiles (usec): 00:18:57.947 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:18:57.947 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:18:57.947 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 338], 00:18:57.947 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:57.947 | 99.99th=[42206] 00:18:57.947 bw ( KiB/s): min= 96, 
max=14288, per=33.16%, avg=6264.00, stdev=7003.92, samples=6 00:18:57.947 iops : min= 24, max= 3572, avg=1566.00, stdev=1750.98, samples=6 00:18:57.947 lat (usec) : 250=12.34%, 500=85.55%, 750=1.26%, 1000=0.02% 00:18:57.947 lat (msec) : 50=0.81% 00:18:57.947 cpu : usr=0.54%, sys=2.33%, ctx=5077, majf=0, minf=1 00:18:57.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.947 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.947 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.947 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3980110: Thu Jul 25 19:49:07 2024 00:18:57.947 read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(32.9MiB/2900msec) 00:18:57.947 slat (nsec): min=5484, max=70303, avg=12186.79, stdev=6539.20 00:18:57.947 clat (usec): min=211, max=41317, avg=327.80, stdev=1471.68 00:18:57.947 lat (usec): min=218, max=41325, avg=339.99, stdev=1472.00 00:18:57.947 clat percentiles (usec): 00:18:57.947 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:18:57.947 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:18:57.947 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 330], 00:18:57.947 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[41157], 99.95th=[41157], 00:18:57.947 | 99.99th=[41157] 00:18:57.947 bw ( KiB/s): min= 6424, max=15480, per=60.87%, avg=11499.20, stdev=3737.75, samples=5 00:18:57.947 iops : min= 1606, max= 3870, avg=2874.80, stdev=934.44, samples=5 00:18:57.947 lat (usec) : 250=23.68%, 500=76.17% 00:18:57.947 lat (msec) : 2=0.01%, 50=0.13% 00:18:57.947 cpu : usr=2.28%, sys=5.04%, ctx=8421, majf=0, minf=1 00:18:57.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.947 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.947 issued rwts: total=8421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.947 00:18:57.947 Run status group 0 (all jobs): 00:18:57.947 READ: bw=18.4MiB/s (19.3MB/s), 133KiB/s-11.3MiB/s (136kB/s-11.9MB/s), io=67.9MiB (71.2MB), run=2900-3679msec 00:18:57.947 00:18:57.947 Disk stats (read/write): 00:18:57.947 nvme0n1: ios=3800/0, merge=0/0, ticks=4274/0, in_queue=4274, util=99.71% 00:18:57.947 nvme0n2: ios=161/0, merge=0/0, ticks=4740/0, in_queue=4740, util=99.57% 00:18:57.947 nvme0n3: ios=4973/0, merge=0/0, ticks=4180/0, in_queue=4180, util=99.22% 00:18:57.947 nvme0n4: ios=8280/0, merge=0/0, ticks=2636/0, in_queue=2636, util=96.74% 00:18:57.947 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:57.947 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:58.515 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.516 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:58.516 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.516 19:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:58.774 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.774 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:59.032 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:59.032 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3980014 00:18:59.032 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:59.032 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:59.290 nvmf hotplug test: fio failed as expected 00:18:59.290 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.549 rmmod nvme_tcp 00:18:59.549 rmmod nvme_fabrics 00:18:59.549 rmmod nvme_keyring 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@125 -- # return 0 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3977988 ']' 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3977988 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3977988 ']' 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3977988 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3977988 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3977988' 00:18:59.549 killing process with pid 3977988 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3977988 00:18:59.549 19:49:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3977988 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.807 19:49:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.346 19:49:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:02.346 00:19:02.346 real 0m23.359s 00:19:02.346 user 1m21.660s 00:19:02.346 sys 0m6.809s 00:19:02.346 19:49:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:02.346 19:49:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 ************************************ 00:19:02.346 END TEST nvmf_fio_target 00:19:02.346 ************************************ 00:19:02.346 19:49:11 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:02.346 19:49:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:02.346 19:49:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:02.346 19:49:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 ************************************ 00:19:02.346 START TEST nvmf_bdevio 00:19:02.346 ************************************ 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:02.346 * Looking for test storage... 
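Note on the hotplug teardown traced above: stripped of the xtrace timestamps, the fio-target cleanup amounts to roughly the shell sequence below. Command names, bdev names, and NQNs are copied from the trace; the loop form, the short rpc.py path, and the $fio_pid/$nvmfpid variables are illustrative stand-ins for the full workspace paths and literal PIDs in the log, not the exact fio.sh source.

  # Delete the RAID bdevs, then each malloc bdev backing the subsystem's namespaces;
  # fio reports Remote I/O errors as the namespaces disappear underneath it.
  rpc.py bdev_raid_delete concat0
  rpc.py bdev_raid_delete raid0
  for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    rpc.py bdev_malloc_delete "$bdev"
  done

  wait "$fio_pid" || fio_status=$?    # fio exiting non-zero is the expected outcome here

  # Detach the kernel initiator, drop the subsystem, then stop the target and unload modules.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job*-verify.state
  kill "$nvmfpid"
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics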
00:19:02.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.346 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.347 19:49:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:04.254 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:04.254 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:04.254 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:04.254 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:04.254 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:04.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:19:04.255 00:19:04.255 --- 10.0.0.2 ping statistics --- 00:19:04.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.255 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:19:04.255 00:19:04.255 --- 10.0.0.1 ping statistics --- 00:19:04.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.255 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3982720 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3982720 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3982720 ']' 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:04.255 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.255 [2024-07-25 19:49:13.471731] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:19:04.255 [2024-07-25 19:49:13.471802] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.255 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.255 [2024-07-25 19:49:13.536462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.255 [2024-07-25 19:49:13.622099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.255 [2024-07-25 19:49:13.622163] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:04.255 [2024-07-25 19:49:13.622176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.255 [2024-07-25 19:49:13.622203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.255 [2024-07-25 19:49:13.622213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.255 [2024-07-25 19:49:13.622306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:04.255 [2024-07-25 19:49:13.622350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:04.255 [2024-07-25 19:49:13.622402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:04.255 [2024-07-25 19:49:13.622404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.513 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.514 [2024-07-25 19:49:13.774898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.514 Malloc0 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:04.514 [2024-07-25 19:49:13.828255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:04.514 { 00:19:04.514 "params": { 00:19:04.514 "name": "Nvme$subsystem", 00:19:04.514 "trtype": "$TEST_TRANSPORT", 00:19:04.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.514 "adrfam": "ipv4", 00:19:04.514 "trsvcid": "$NVMF_PORT", 00:19:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.514 "hdgst": ${hdgst:-false}, 00:19:04.514 "ddgst": ${ddgst:-false} 00:19:04.514 }, 00:19:04.514 "method": "bdev_nvme_attach_controller" 00:19:04.514 } 00:19:04.514 EOF 00:19:04.514 )") 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:04.514 19:49:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:04.514 "params": { 00:19:04.514 "name": "Nvme1", 00:19:04.514 "trtype": "tcp", 00:19:04.514 "traddr": "10.0.0.2", 00:19:04.514 "adrfam": "ipv4", 00:19:04.514 "trsvcid": "4420", 00:19:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.514 "hdgst": false, 00:19:04.514 "ddgst": false 00:19:04.514 }, 00:19:04.514 "method": "bdev_nvme_attach_controller" 00:19:04.514 }' 00:19:04.514 [2024-07-25 19:49:13.875548] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
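Condensed from the rpc_cmd traces above, the bdevio target setup and the way the generated JSON reaches the test binary look roughly like this. rpc.py stands in for the full scripts/rpc.py path, and the process substitution is an assumed reconstruction of the /dev/fd/62 argument seen in the trace, not a quote of bdevio.sh.

  # Target side (configures the nvmf_tgt launched earlier inside cvl_0_0_ns_spdk):
  # TCP transport with the traced options, one 64 MiB malloc bdev with 512-byte blocks,
  # exposed as namespace 1 of cnode1 and listening on 10.0.0.2:4420.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevio attaches to that subsystem as bdev "Nvme1" using the
  # bdev_nvme_attach_controller parameters printed above, handed over a pipe fd.
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)

Passing the config through an anonymous /dev/fd path avoids writing a temporary JSON file into the workspace; bdevio reads it once at startup and runs its suite against the resulting Nvme1n1 bdev.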
00:19:04.514 [2024-07-25 19:49:13.875628] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982756 ] 00:19:04.514 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.514 [2024-07-25 19:49:13.937635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:04.773 [2024-07-25 19:49:14.030131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.773 [2024-07-25 19:49:14.030183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.773 [2024-07-25 19:49:14.030186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.032 I/O targets: 00:19:05.032 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:05.032 00:19:05.032 00:19:05.032 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.032 http://cunit.sourceforge.net/ 00:19:05.032 00:19:05.032 00:19:05.032 Suite: bdevio tests on: Nvme1n1 00:19:05.032 Test: blockdev write read block ...passed 00:19:05.032 Test: blockdev write zeroes read block ...passed 00:19:05.032 Test: blockdev write zeroes read no split ...passed 00:19:05.032 Test: blockdev write zeroes read split ...passed 00:19:05.032 Test: blockdev write zeroes read split partial ...passed 00:19:05.032 Test: blockdev reset ...[2024-07-25 19:49:14.412430] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.032 [2024-07-25 19:49:14.412541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd7f80 (9): Bad file descriptor 00:19:05.291 [2024-07-25 19:49:14.510296] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:05.291 passed 00:19:05.292 Test: blockdev write read 8 blocks ...passed 00:19:05.292 Test: blockdev write read size > 128k ...passed 00:19:05.292 Test: blockdev write read invalid size ...passed 00:19:05.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:05.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:05.292 Test: blockdev write read max offset ...passed 00:19:05.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:05.292 Test: blockdev writev readv 8 blocks ...passed 00:19:05.292 Test: blockdev writev readv 30 x 1block ...passed 00:19:05.550 Test: blockdev writev readv block ...passed 00:19:05.550 Test: blockdev writev readv size > 128k ...passed 00:19:05.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:05.550 Test: blockdev comparev and writev ...[2024-07-25 19:49:14.765502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.765537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.765561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.765579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.765886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.765910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.765932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.765948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.766269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.766294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.766315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.766330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.766640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.766664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.766686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.550 [2024-07-25 19:49:14.766702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:05.550 passed 00:19:05.550 Test: blockdev nvme passthru rw ...passed 00:19:05.550 Test: blockdev nvme passthru vendor specific ...[2024-07-25 19:49:14.850363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.550 [2024-07-25 19:49:14.850398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.850554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.550 [2024-07-25 19:49:14.850578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.850730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.550 [2024-07-25 19:49:14.850753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:05.550 [2024-07-25 19:49:14.850902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.550 [2024-07-25 19:49:14.850925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:05.550 passed 00:19:05.550 Test: blockdev nvme admin passthru ...passed 00:19:05.550 Test: blockdev copy ...passed 00:19:05.550 00:19:05.550 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.550 suites 1 1 n/a 0 0 00:19:05.550 tests 23 23 23 0 0 00:19:05.550 asserts 152 152 152 0 n/a 00:19:05.550 00:19:05.550 Elapsed time = 1.308 seconds 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.808 rmmod nvme_tcp 00:19:05.808 rmmod nvme_fabrics 00:19:05.808 rmmod nvme_keyring 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3982720 ']' 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3982720 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3982720 ']' 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3982720 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3982720 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3982720' 00:19:05.808 killing process with pid 3982720 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3982720 00:19:05.808 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3982720 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.065 19:49:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.602 19:49:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.602 00:19:08.602 real 0m6.270s 00:19:08.602 user 0m10.163s 00:19:08.602 sys 0m2.089s 00:19:08.602 19:49:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.602 19:49:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.602 ************************************ 00:19:08.602 END TEST nvmf_bdevio 00:19:08.602 ************************************ 00:19:08.602 19:49:17 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:08.602 19:49:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:08.602 19:49:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:08.602 19:49:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:08.602 ************************************ 00:19:08.602 START TEST nvmf_auth_target 00:19:08.602 ************************************ 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:08.602 * Looking for test storage... 
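The nvmf_auth_target run starting here calls nvmftestinit again, so the NIC and namespace bring-up already traced for the bdevio run recurs below until the excerpt cuts off. Collapsed to plain commands, with the script variables resolved to the values shown in the trace, that network setup is approximately:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                  # target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # one ice-bound port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP 4420 on the host-side port
  ping -c 1 10.0.0.2                            # verify reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1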
00:19:08.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.602 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.603 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.506 19:49:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:10.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:10.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:10.506 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:10.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:10.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target 
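nvmf_tcp_init splits the two E810 ports across namespaces: cvl_0_0 becomes the target interface inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24. Condensed from the trace here and the loopback/iptables/ping steps just below (interface names and addresses are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator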
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:10.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:19:10.507 00:19:10.507 --- 10.0.0.2 ping statistics --- 00:19:10.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.507 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:19:10.507 00:19:10.507 --- 10.0.0.1 ping statistics --- 00:19:10.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.507 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3984826 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3984826 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3984826 ']' 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
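nvmfappstart then brings up the target application inside that namespace with the auth layer's debug log enabled (-L nvmf_auth) and waits for its RPC socket before the test proceeds. A minimal sketch, run from the SPDK repo root with paths shortened from the trace; the framework's waitforlisten helper is replaced here by a plain poll for the default RPC socket:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # stand-in for waitforlisten: wait until the target's RPC socket exists
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done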
00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:10.507 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.765 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:10.765 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:10.765 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.765 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.765 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3984960 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=638853068aad3e84981921362eebfdc1531e8f639c2495a3 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MW9 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 638853068aad3e84981921362eebfdc1531e8f639c2495a3 0 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 638853068aad3e84981921362eebfdc1531e8f639c2495a3 0 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=638853068aad3e84981921362eebfdc1531e8f639c2495a3 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MW9 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MW9 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.MW9 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
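With the target up and a second spdk_tgt instance (started just above on /var/tmp/host.sock with -L nvme_auth) acting as the host, target/auth.sh generates its DH-CHAP secrets. gen_dhchap_key (from nvmf/common.sh) reads random bytes with xxd, wraps them into a DHHC-1:<digest-id>:<base64 payload>: string, stores it in a mode-0600 temp file and echoes the path; keys[] holds the host keys and ckeys[] the optional controller (bidirectional) keys, as the following lines show for the remaining key indexes. A sketch of how the helper is driven, assuming the framework environment is already sourced as in target/auth.sh:

  # keys[0]: 48-character hex secret, hash id 00 (null); ckeys[0]: 64-character, sha512 (id 03)
  keys[0]=$(gen_dhchap_key "null" 48)      # returns e.g. /tmp/spdk.key-null.XXX (mode 0600)
  ckeys[0]=$(gen_dhchap_key "sha512" 64)   # controller-side key for bidirectional auth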
key 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c0889dcd988dac516ffe403c110b1cd79c9132117ca8e1025f49ffb3ae509458 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D8k 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c0889dcd988dac516ffe403c110b1cd79c9132117ca8e1025f49ffb3ae509458 3 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c0889dcd988dac516ffe403c110b1cd79c9132117ca8e1025f49ffb3ae509458 3 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c0889dcd988dac516ffe403c110b1cd79c9132117ca8e1025f49ffb3ae509458 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D8k 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D8k 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.D8k 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b7def7145b70e650dd7180cd9869cdd 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5VG 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b7def7145b70e650dd7180cd9869cdd 1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b7def7145b70e650dd7180cd9869cdd 1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=1b7def7145b70e650dd7180cd9869cdd 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5VG 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5VG 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.5VG 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa12b3cac7d2859a6ef25af4b0a3f06172b8663ef5d32ebe 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ECn 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa12b3cac7d2859a6ef25af4b0a3f06172b8663ef5d32ebe 2 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa12b3cac7d2859a6ef25af4b0a3f06172b8663ef5d32ebe 2 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa12b3cac7d2859a6ef25af4b0a3f06172b8663ef5d32ebe 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ECn 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ECn 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ECn 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4c5cca27bbf3bcdf88d86c5553753dccbcf62400ab107886 00:19:11.025 
19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ntm 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4c5cca27bbf3bcdf88d86c5553753dccbcf62400ab107886 2 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4c5cca27bbf3bcdf88d86c5553753dccbcf62400ab107886 2 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4c5cca27bbf3bcdf88d86c5553753dccbcf62400ab107886 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ntm 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ntm 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ntm 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55f759e50efe54da59ebd3eb77a069b9 00:19:11.025 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SwV 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55f759e50efe54da59ebd3eb77a069b9 1 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55f759e50efe54da59ebd3eb77a069b9 1 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=55f759e50efe54da59ebd3eb77a069b9 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SwV 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SwV 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.SwV 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:11.283 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e6aecfe02e2efd263147f8c6478ddbc2228f119124516c0002d78d544dd4583 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FQ5 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e6aecfe02e2efd263147f8c6478ddbc2228f119124516c0002d78d544dd4583 3 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e6aecfe02e2efd263147f8c6478ddbc2228f119124516c0002d78d544dd4583 3 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e6aecfe02e2efd263147f8c6478ddbc2228f119124516c0002d78d544dd4583 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FQ5 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FQ5 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.FQ5 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3984826 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3984826 ']' 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:11.284 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.541 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3984960 /var/tmp/host.sock 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3984960 ']' 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:11.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:11.542 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.799 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:11.799 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MW9 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MW9 00:19:11.800 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MW9 00:19:12.057 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.D8k ]] 00:19:12.058 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D8k 00:19:12.058 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.058 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.058 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.058 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D8k 00:19:12.058 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
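Every generated key file is then registered under a keyring name on both sides: rpc_cmd talks to the target over the default /var/tmp/spdk.sock, hostrpc to the host-side process over /var/tmp/host.sock. Condensed for key0/ckey0, with rpc.py paths shortened and the temp-file names taken from this run:

  # target side (default RPC socket)
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.MW9
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D8k
  # host side (spdk_tgt listening on /var/tmp/host.sock)
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.MW9
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D8k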
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D8k 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5VG 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5VG 00:19:12.316 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5VG 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ECn ]] 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ECn 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ECn 00:19:12.574 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ECn 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ntm 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ntm 00:19:12.832 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ntm 00:19:13.089 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.SwV ]] 00:19:13.090 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SwV 00:19:13.090 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.090 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.090 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.090 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SwV 00:19:13.090 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.SwV 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FQ5 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FQ5 00:19:13.348 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FQ5 00:19:13.605 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:13.605 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:13.605 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.605 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.605 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.605 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.864 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.121 00:19:14.121 19:49:23 nvmf_tcp.nvmf_auth_target -- 
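Each connect_authenticate iteration is three RPCs: pin the host to one digest/DH group, allow the host NQN on the subsystem with the chosen key pair, and attach a controller through the host process so the fabric connect has to run DH-HMAC-CHAP. Condensed from the sha256/null/key0 iteration above (NQN and UUID values are the ones from this run, rpc.py paths shortened):

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # 1) host: restrict the negotiable digest and DH group
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # 2) target: admit the host with a DH-CHAP key (plus controller key for bidirectional auth)
  scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3) host: attach a controller; authentication happens during this connect
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0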
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.121 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.121 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.377 { 00:19:14.377 "cntlid": 1, 00:19:14.377 "qid": 0, 00:19:14.377 "state": "enabled", 00:19:14.377 "listen_address": { 00:19:14.377 "trtype": "TCP", 00:19:14.377 "adrfam": "IPv4", 00:19:14.377 "traddr": "10.0.0.2", 00:19:14.377 "trsvcid": "4420" 00:19:14.377 }, 00:19:14.377 "peer_address": { 00:19:14.377 "trtype": "TCP", 00:19:14.377 "adrfam": "IPv4", 00:19:14.377 "traddr": "10.0.0.1", 00:19:14.377 "trsvcid": "54080" 00:19:14.377 }, 00:19:14.377 "auth": { 00:19:14.377 "state": "completed", 00:19:14.377 "digest": "sha256", 00:19:14.377 "dhgroup": "null" 00:19:14.377 } 00:19:14.377 } 00:19:14.377 ]' 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.377 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.635 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
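Success is verified by reading the controller back on the host and dumping the subsystem's queue pairs on the target: the auth object of the new qpair must report state "completed" together with the digest and DH group that were forced on the host. The same checks as the jq filters above, condensed (rpc.py paths shortened):

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.state'    # expect completed
  echo "$qpairs" | jq -r '.[0].auth.digest'   # expect the digest pinned on the host (sha256 here)
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'  # expect the DH group pinned on the host (null here)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0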
+x 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.010 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.011 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.268 00:19:16.268 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.268 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.268 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.526 { 00:19:16.526 "cntlid": 3, 00:19:16.526 "qid": 0, 00:19:16.526 "state": "enabled", 00:19:16.526 "listen_address": { 00:19:16.526 
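The same key pair is also exercised through the kernel initiator: nvme-cli is handed the literal DHHC-1 secrets (the contents of the key files generated earlier, printed in full in the connect line above), and the session is torn down with nvme disconnect before nvmf_subsystem_remove_host clears the host entry for the next key. Sketch for key0, with the secrets abbreviated to placeholders standing in for the strings shown in the trace:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "DHHC-1:00:...:" --dhchap-ctrl-secret "DHHC-1:03:...:"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55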
"trtype": "TCP", 00:19:16.526 "adrfam": "IPv4", 00:19:16.526 "traddr": "10.0.0.2", 00:19:16.526 "trsvcid": "4420" 00:19:16.526 }, 00:19:16.526 "peer_address": { 00:19:16.526 "trtype": "TCP", 00:19:16.526 "adrfam": "IPv4", 00:19:16.526 "traddr": "10.0.0.1", 00:19:16.526 "trsvcid": "54116" 00:19:16.526 }, 00:19:16.526 "auth": { 00:19:16.526 "state": "completed", 00:19:16.526 "digest": "sha256", 00:19:16.526 "dhgroup": "null" 00:19:16.526 } 00:19:16.526 } 00:19:16.526 ]' 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.526 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.784 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.784 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.784 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.042 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.979 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.237 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.495 00:19:18.495 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.495 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.495 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.753 { 00:19:18.753 "cntlid": 5, 00:19:18.753 "qid": 0, 00:19:18.753 "state": "enabled", 00:19:18.753 "listen_address": { 00:19:18.753 "trtype": "TCP", 00:19:18.753 "adrfam": "IPv4", 00:19:18.753 "traddr": "10.0.0.2", 00:19:18.753 "trsvcid": "4420" 00:19:18.753 }, 00:19:18.753 "peer_address": { 00:19:18.753 "trtype": "TCP", 00:19:18.753 "adrfam": "IPv4", 00:19:18.753 "traddr": "10.0.0.1", 00:19:18.753 "trsvcid": "54134" 00:19:18.753 }, 00:19:18.753 "auth": { 00:19:18.753 "state": "completed", 00:19:18.753 "digest": "sha256", 00:19:18.753 "dhgroup": "null" 00:19:18.753 } 00:19:18.753 } 00:19:18.753 ]' 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.753 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.011 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.944 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.202 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.459 00:19:20.459 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.459 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.459 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.717 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.717 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.717 19:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.717 19:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.976 { 00:19:20.976 "cntlid": 7, 00:19:20.976 "qid": 0, 00:19:20.976 "state": "enabled", 00:19:20.976 "listen_address": { 00:19:20.976 "trtype": "TCP", 00:19:20.976 "adrfam": "IPv4", 00:19:20.976 "traddr": "10.0.0.2", 00:19:20.976 "trsvcid": "4420" 00:19:20.976 }, 00:19:20.976 "peer_address": { 00:19:20.976 "trtype": "TCP", 00:19:20.976 "adrfam": "IPv4", 00:19:20.976 "traddr": "10.0.0.1", 00:19:20.976 "trsvcid": "49430" 00:19:20.976 }, 00:19:20.976 "auth": { 00:19:20.976 "state": "completed", 00:19:20.976 "digest": "sha256", 00:19:20.976 "dhgroup": "null" 00:19:20.976 } 00:19:20.976 } 00:19:20.976 ]' 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.976 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.233 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.170 
19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.170 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.428 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.692 00:19:23.000 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.000 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.000 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.261 19:49:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.261 { 00:19:23.261 "cntlid": 9, 00:19:23.261 "qid": 0, 00:19:23.261 "state": "enabled", 00:19:23.261 "listen_address": { 00:19:23.261 "trtype": "TCP", 00:19:23.261 "adrfam": "IPv4", 00:19:23.261 "traddr": "10.0.0.2", 00:19:23.261 "trsvcid": "4420" 00:19:23.261 }, 00:19:23.261 "peer_address": { 00:19:23.261 "trtype": "TCP", 00:19:23.261 "adrfam": "IPv4", 00:19:23.261 "traddr": "10.0.0.1", 00:19:23.261 "trsvcid": "49452" 00:19:23.261 }, 00:19:23.261 "auth": { 00:19:23.261 "state": "completed", 00:19:23.261 "digest": "sha256", 00:19:23.261 "dhgroup": "ffdhe2048" 00:19:23.261 } 00:19:23.261 } 00:19:23.261 ]' 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.261 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.519 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.454 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.712 19:49:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.712 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.970 00:19:24.970 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.970 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.970 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.227 { 00:19:25.227 "cntlid": 11, 00:19:25.227 "qid": 0, 00:19:25.227 "state": "enabled", 00:19:25.227 "listen_address": { 00:19:25.227 "trtype": "TCP", 00:19:25.227 "adrfam": "IPv4", 00:19:25.227 "traddr": "10.0.0.2", 00:19:25.227 "trsvcid": "4420" 00:19:25.227 }, 00:19:25.227 "peer_address": { 00:19:25.227 "trtype": "TCP", 00:19:25.227 "adrfam": "IPv4", 00:19:25.227 "traddr": "10.0.0.1", 00:19:25.227 "trsvcid": "49474" 00:19:25.227 }, 00:19:25.227 "auth": { 00:19:25.227 "state": "completed", 00:19:25.227 "digest": "sha256", 00:19:25.227 "dhgroup": "ffdhe2048" 00:19:25.227 } 00:19:25.227 } 00:19:25.227 ]' 00:19:25.227 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.484 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.484 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.484 19:49:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:25.484 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.484 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.484 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.484 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.741 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:19:26.676 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.676 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.677 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.677 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.677 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.677 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.677 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.677 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.935 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.194 00:19:27.194 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.194 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.194 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.453 { 00:19:27.453 "cntlid": 13, 00:19:27.453 "qid": 0, 00:19:27.453 "state": "enabled", 00:19:27.453 "listen_address": { 00:19:27.453 "trtype": "TCP", 00:19:27.453 "adrfam": "IPv4", 00:19:27.453 "traddr": "10.0.0.2", 00:19:27.453 "trsvcid": "4420" 00:19:27.453 }, 00:19:27.453 "peer_address": { 00:19:27.453 "trtype": "TCP", 00:19:27.453 "adrfam": "IPv4", 00:19:27.453 "traddr": "10.0.0.1", 00:19:27.453 "trsvcid": "49496" 00:19:27.453 }, 00:19:27.453 "auth": { 00:19:27.453 "state": "completed", 00:19:27.453 "digest": "sha256", 00:19:27.453 "dhgroup": "ffdhe2048" 00:19:27.453 } 00:19:27.453 } 00:19:27.453 ]' 00:19:27.453 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.710 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.967 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:19:28.900 19:49:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.900 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.157 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.414 00:19:29.414 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.414 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.414 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
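The qpair check that auth.sh repeats after every attach condenses to the sketch below. It only restates commands already visible in this log; the bare rpc.py call assumes the target application listens on rpc.py's default socket, which the rpc_cmd wrapper in the log does not show.

# Per-iteration verification, using the paths and NQNs from this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host-side SPDK app (RPC socket /var/tmp/host.sock): the attached controller should be nvme0.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

# Target-side app (default RPC socket assumed): the qpair should report the negotiated
# digest and DH group, with authentication completed.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.digest'  <<< "$qpairs"    # sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"    # null, ffdhe2048, ffdhe3072, ...
jq -r '.[0].auth.state'   <<< "$qpairs"    # completed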
00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.672 { 00:19:29.672 "cntlid": 15, 00:19:29.672 "qid": 0, 00:19:29.672 "state": "enabled", 00:19:29.672 "listen_address": { 00:19:29.672 "trtype": "TCP", 00:19:29.672 "adrfam": "IPv4", 00:19:29.672 "traddr": "10.0.0.2", 00:19:29.672 "trsvcid": "4420" 00:19:29.672 }, 00:19:29.672 "peer_address": { 00:19:29.672 "trtype": "TCP", 00:19:29.672 "adrfam": "IPv4", 00:19:29.672 "traddr": "10.0.0.1", 00:19:29.672 "trsvcid": "49524" 00:19:29.672 }, 00:19:29.672 "auth": { 00:19:29.672 "state": "completed", 00:19:29.672 "digest": "sha256", 00:19:29.672 "dhgroup": "ffdhe2048" 00:19:29.672 } 00:19:29.672 } 00:19:29.672 ]' 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.672 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.931 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.931 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.931 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.931 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.931 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.188 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.124 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.381 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.382 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.639 00:19:31.639 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.639 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.639 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.897 { 00:19:31.897 "cntlid": 17, 00:19:31.897 "qid": 0, 00:19:31.897 "state": "enabled", 00:19:31.897 "listen_address": { 00:19:31.897 "trtype": "TCP", 00:19:31.897 "adrfam": "IPv4", 00:19:31.897 "traddr": "10.0.0.2", 00:19:31.897 "trsvcid": "4420" 00:19:31.897 }, 00:19:31.897 "peer_address": { 00:19:31.897 "trtype": "TCP", 00:19:31.897 "adrfam": "IPv4", 00:19:31.897 "traddr": "10.0.0.1", 00:19:31.897 "trsvcid": "50626" 00:19:31.897 }, 00:19:31.897 "auth": { 00:19:31.897 "state": "completed", 00:19:31.897 "digest": "sha256", 00:19:31.897 "dhgroup": "ffdhe3072" 00:19:31.897 } 00:19:31.897 } 00:19:31.897 ]' 00:19:31.897 19:49:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.897 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.155 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:19:33.090 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.090 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.090 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.090 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.348 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.348 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.348 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.348 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.608 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.609 
19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.609 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.867 00:19:33.867 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.867 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.867 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.123 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.123 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.123 19:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.123 19:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.123 19:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.123 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.123 { 00:19:34.123 "cntlid": 19, 00:19:34.123 "qid": 0, 00:19:34.123 "state": "enabled", 00:19:34.123 "listen_address": { 00:19:34.123 "trtype": "TCP", 00:19:34.123 "adrfam": "IPv4", 00:19:34.123 "traddr": "10.0.0.2", 00:19:34.123 "trsvcid": "4420" 00:19:34.123 }, 00:19:34.123 "peer_address": { 00:19:34.123 "trtype": "TCP", 00:19:34.123 "adrfam": "IPv4", 00:19:34.124 "traddr": "10.0.0.1", 00:19:34.124 "trsvcid": "50666" 00:19:34.124 }, 00:19:34.124 "auth": { 00:19:34.124 "state": "completed", 00:19:34.124 "digest": "sha256", 00:19:34.124 "dhgroup": "ffdhe3072" 00:19:34.124 } 00:19:34.124 } 00:19:34.124 ]' 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.124 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.382 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:19:35.316 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.316 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.316 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.316 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.575 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.575 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.575 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.575 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.833 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.092 00:19:36.092 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.092 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
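The same key material is also exercised through the kernel initiator, as in the nvme connect / nvme disconnect lines of this log. A minimal sketch of that leg, with the long DHHC-1 secret strings (printed in full elsewhere in the log) replaced by clearly marked placeholders:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0
host_key='DHHC-1:...'   # placeholder for the host secret shown in the log
ctrl_key='DHHC-1:...'   # placeholder for the controller (bidirectional) secret

# In-band DH-HMAC-CHAP from the kernel host, then a clean disconnect.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n "$subnqn"    # "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"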
00:19:36.092 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.350 { 00:19:36.350 "cntlid": 21, 00:19:36.350 "qid": 0, 00:19:36.350 "state": "enabled", 00:19:36.350 "listen_address": { 00:19:36.350 "trtype": "TCP", 00:19:36.350 "adrfam": "IPv4", 00:19:36.350 "traddr": "10.0.0.2", 00:19:36.350 "trsvcid": "4420" 00:19:36.350 }, 00:19:36.350 "peer_address": { 00:19:36.350 "trtype": "TCP", 00:19:36.350 "adrfam": "IPv4", 00:19:36.350 "traddr": "10.0.0.1", 00:19:36.350 "trsvcid": "50686" 00:19:36.350 }, 00:19:36.350 "auth": { 00:19:36.350 "state": "completed", 00:19:36.350 "digest": "sha256", 00:19:36.350 "dhgroup": "ffdhe3072" 00:19:36.350 } 00:19:36.350 } 00:19:36.350 ]' 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.350 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.608 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.608 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.608 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.608 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.608 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.867 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:19:37.802 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.802 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.802 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.802 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.802 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.802 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:37.802 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.802 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.088 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.346 00:19:38.346 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.346 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.346 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.605 { 00:19:38.605 "cntlid": 23, 00:19:38.605 "qid": 0, 00:19:38.605 "state": "enabled", 00:19:38.605 "listen_address": { 00:19:38.605 "trtype": "TCP", 00:19:38.605 "adrfam": "IPv4", 00:19:38.605 "traddr": "10.0.0.2", 00:19:38.605 "trsvcid": "4420" 00:19:38.605 }, 00:19:38.605 "peer_address": { 00:19:38.605 "trtype": "TCP", 00:19:38.605 "adrfam": "IPv4", 
00:19:38.605 "traddr": "10.0.0.1", 00:19:38.605 "trsvcid": "50710" 00:19:38.605 }, 00:19:38.605 "auth": { 00:19:38.605 "state": "completed", 00:19:38.605 "digest": "sha256", 00:19:38.605 "dhgroup": "ffdhe3072" 00:19:38.605 } 00:19:38.605 } 00:19:38.605 ]' 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.605 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.863 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:19:39.799 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.799 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.799 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.799 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.799 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.799 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.800 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.800 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.800 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.058 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.626 00:19:40.626 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.626 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.626 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.884 { 00:19:40.884 "cntlid": 25, 00:19:40.884 "qid": 0, 00:19:40.884 "state": "enabled", 00:19:40.884 "listen_address": { 00:19:40.884 "trtype": "TCP", 00:19:40.884 "adrfam": "IPv4", 00:19:40.884 "traddr": "10.0.0.2", 00:19:40.884 "trsvcid": "4420" 00:19:40.884 }, 00:19:40.884 "peer_address": { 00:19:40.884 "trtype": "TCP", 00:19:40.884 "adrfam": "IPv4", 00:19:40.884 "traddr": "10.0.0.1", 00:19:40.884 "trsvcid": "58198" 00:19:40.884 }, 00:19:40.884 "auth": { 00:19:40.884 "state": "completed", 00:19:40.884 "digest": "sha256", 00:19:40.884 "dhgroup": "ffdhe4096" 00:19:40.884 } 00:19:40.884 } 00:19:40.884 ]' 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.884 19:49:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.143 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.078 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.336 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.337 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.904 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.904 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.905 { 00:19:42.905 "cntlid": 27, 00:19:42.905 "qid": 0, 00:19:42.905 "state": "enabled", 00:19:42.905 "listen_address": { 00:19:42.905 "trtype": "TCP", 00:19:42.905 "adrfam": "IPv4", 00:19:42.905 "traddr": "10.0.0.2", 00:19:42.905 "trsvcid": "4420" 00:19:42.905 }, 00:19:42.905 "peer_address": { 00:19:42.905 "trtype": "TCP", 00:19:42.905 "adrfam": "IPv4", 00:19:42.905 "traddr": "10.0.0.1", 00:19:42.905 "trsvcid": "58218" 00:19:42.905 }, 00:19:42.905 "auth": { 00:19:42.905 "state": "completed", 00:19:42.905 "digest": "sha256", 00:19:42.905 "dhgroup": "ffdhe4096" 00:19:42.905 } 00:19:42.905 } 00:19:42.905 ]' 00:19:42.905 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.163 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.421 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
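Every (digest, dhgroup, key) combination in this log runs the same cycle; a condensed sketch of one iteration follows, using only commands that appear above. key1 and ckey1 are key names registered earlier in auth.sh (outside this excerpt), and the target-side calls assume rpc.py's default socket.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Host app: restrict DH-HMAC-CHAP negotiation to the combination under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target app: allow this host on the subsystem with the chosen key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host app: attach (this performs the authentication), then tear the pairing down again.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"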
00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.358 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.616 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.181 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.181 
19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.181 { 00:19:45.181 "cntlid": 29, 00:19:45.181 "qid": 0, 00:19:45.181 "state": "enabled", 00:19:45.181 "listen_address": { 00:19:45.181 "trtype": "TCP", 00:19:45.181 "adrfam": "IPv4", 00:19:45.181 "traddr": "10.0.0.2", 00:19:45.181 "trsvcid": "4420" 00:19:45.181 }, 00:19:45.181 "peer_address": { 00:19:45.181 "trtype": "TCP", 00:19:45.181 "adrfam": "IPv4", 00:19:45.181 "traddr": "10.0.0.1", 00:19:45.181 "trsvcid": "58246" 00:19:45.181 }, 00:19:45.181 "auth": { 00:19:45.181 "state": "completed", 00:19:45.181 "digest": "sha256", 00:19:45.181 "dhgroup": "ffdhe4096" 00:19:45.181 } 00:19:45.181 } 00:19:45.181 ]' 00:19:45.181 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.438 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.696 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.629 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.886 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.887 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.451 00:19:47.451 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.451 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.451 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.708 { 00:19:47.708 "cntlid": 31, 00:19:47.708 "qid": 0, 00:19:47.708 "state": "enabled", 00:19:47.708 "listen_address": { 00:19:47.708 "trtype": "TCP", 00:19:47.708 "adrfam": "IPv4", 00:19:47.708 "traddr": "10.0.0.2", 00:19:47.708 "trsvcid": "4420" 00:19:47.708 }, 00:19:47.708 "peer_address": { 00:19:47.708 "trtype": "TCP", 00:19:47.708 "adrfam": "IPv4", 00:19:47.708 "traddr": "10.0.0.1", 00:19:47.708 "trsvcid": "58274" 00:19:47.708 }, 00:19:47.708 "auth": { 00:19:47.708 "state": "completed", 00:19:47.708 "digest": "sha256", 00:19:47.708 "dhgroup": "ffdhe4096" 00:19:47.708 } 00:19:47.708 } 00:19:47.708 ]' 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.708 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.708 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.708 19:49:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.708 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.708 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.708 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.966 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.898 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:49.156 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.720 00:19:49.720 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.720 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.720 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.978 { 00:19:49.978 "cntlid": 33, 00:19:49.978 "qid": 0, 00:19:49.978 "state": "enabled", 00:19:49.978 "listen_address": { 00:19:49.978 "trtype": "TCP", 00:19:49.978 "adrfam": "IPv4", 00:19:49.978 "traddr": "10.0.0.2", 00:19:49.978 "trsvcid": "4420" 00:19:49.978 }, 00:19:49.978 "peer_address": { 00:19:49.978 "trtype": "TCP", 00:19:49.978 "adrfam": "IPv4", 00:19:49.978 "traddr": "10.0.0.1", 00:19:49.978 "trsvcid": "58296" 00:19:49.978 }, 00:19:49.978 "auth": { 00:19:49.978 "state": "completed", 00:19:49.978 "digest": "sha256", 00:19:49.978 "dhgroup": "ffdhe6144" 00:19:49.978 } 00:19:49.978 } 00:19:49.978 ]' 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.978 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.236 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.236 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.236 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.493 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:51.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.425 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.683 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.249 00:19:52.249 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.249 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.249 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
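Besides the SPDK initiator, each combination is also exercised in-band with the kernel host, as in the nvme connect / nvme disconnect pair in the lines above. Stripped of the trace prefixes, that step amounts to the following; the two DHHC-1 strings are the per-key secrets printed in full in the trace and are abbreviated to variables here.

  # In-band reconnect with nvme-cli using the DH-HMAC-CHAP secrets from the trace.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  DHCHAP_KEY='DHHC-1:00:...'        # host secret for the key id under test (abbreviated)
  DHCHAP_CTRL_KEY='DHHC-1:03:...'   # controller secret for bidirectional auth (abbreviated)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
       --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # prints "disconnected 1 controller(s)" on success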
00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.505 { 00:19:52.505 "cntlid": 35, 00:19:52.505 "qid": 0, 00:19:52.505 "state": "enabled", 00:19:52.505 "listen_address": { 00:19:52.505 "trtype": "TCP", 00:19:52.505 "adrfam": "IPv4", 00:19:52.505 "traddr": "10.0.0.2", 00:19:52.505 "trsvcid": "4420" 00:19:52.505 }, 00:19:52.505 "peer_address": { 00:19:52.505 "trtype": "TCP", 00:19:52.505 "adrfam": "IPv4", 00:19:52.505 "traddr": "10.0.0.1", 00:19:52.505 "trsvcid": "54546" 00:19:52.505 }, 00:19:52.505 "auth": { 00:19:52.505 "state": "completed", 00:19:52.505 "digest": "sha256", 00:19:52.505 "dhgroup": "ffdhe6144" 00:19:52.505 } 00:19:52.505 } 00:19:52.505 ]' 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.505 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.763 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.697 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
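The three jq checks in the lines above are how each iteration decides that the qpair really authenticated with the expected parameters. In isolation, and assuming the same one-element qpair array shape as in the dumps, they reduce to:

  # Stand-alone form of the qpair auth checks (values for the sha256/ffdhe6144 pass shown above).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]   # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished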
00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.261 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.262 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.262 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.262 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.826 00:19:54.826 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.826 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.826 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.826 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.826 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.826 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.826 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.826 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.827 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.827 { 00:19:54.827 "cntlid": 37, 00:19:54.827 "qid": 0, 00:19:54.827 "state": "enabled", 00:19:54.827 "listen_address": { 00:19:54.827 "trtype": "TCP", 00:19:54.827 "adrfam": "IPv4", 00:19:54.827 "traddr": "10.0.0.2", 00:19:54.827 "trsvcid": "4420" 00:19:54.827 }, 00:19:54.827 "peer_address": { 00:19:54.827 "trtype": "TCP", 00:19:54.827 "adrfam": "IPv4", 00:19:54.827 "traddr": "10.0.0.1", 00:19:54.827 "trsvcid": "54566" 00:19:54.827 }, 00:19:54.827 "auth": { 00:19:54.827 "state": "completed", 00:19:54.827 "digest": "sha256", 00:19:54.827 "dhgroup": "ffdhe6144" 00:19:54.827 } 00:19:54.827 } 00:19:54.827 ]' 00:19:54.827 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.084 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.342 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:19:56.272 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.273 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.531 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.532 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.532 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.532 19:50:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.532 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.532 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.133 00:19:57.133 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.133 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.133 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.390 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.390 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.390 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.390 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.390 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.390 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.390 { 00:19:57.390 "cntlid": 39, 00:19:57.390 "qid": 0, 00:19:57.390 "state": "enabled", 00:19:57.390 "listen_address": { 00:19:57.390 "trtype": "TCP", 00:19:57.390 "adrfam": "IPv4", 00:19:57.390 "traddr": "10.0.0.2", 00:19:57.390 "trsvcid": "4420" 00:19:57.390 }, 00:19:57.390 "peer_address": { 00:19:57.390 "trtype": "TCP", 00:19:57.390 "adrfam": "IPv4", 00:19:57.390 "traddr": "10.0.0.1", 00:19:57.390 "trsvcid": "54598" 00:19:57.390 }, 00:19:57.390 "auth": { 00:19:57.390 "state": "completed", 00:19:57.390 "digest": "sha256", 00:19:57.390 "dhgroup": "ffdhe6144" 00:19:57.391 } 00:19:57.391 } 00:19:57.391 ]' 00:19:57.391 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.648 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.905 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.839 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.096 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.097 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.029 00:20:00.029 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.029 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.029 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.029 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.030 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.030 19:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.030 19:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.030 19:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.030 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.030 { 00:20:00.030 "cntlid": 41, 00:20:00.030 "qid": 0, 00:20:00.030 "state": "enabled", 00:20:00.030 "listen_address": { 00:20:00.030 "trtype": "TCP", 00:20:00.030 "adrfam": "IPv4", 00:20:00.030 "traddr": "10.0.0.2", 00:20:00.030 "trsvcid": "4420" 00:20:00.030 }, 00:20:00.030 "peer_address": { 00:20:00.030 "trtype": "TCP", 00:20:00.030 "adrfam": "IPv4", 00:20:00.030 "traddr": "10.0.0.1", 00:20:00.030 "trsvcid": "54622" 00:20:00.030 }, 00:20:00.030 "auth": { 00:20:00.030 "state": "completed", 00:20:00.030 "digest": "sha256", 00:20:00.030 "dhgroup": "ffdhe8192" 00:20:00.030 } 00:20:00.030 } 00:20:00.030 ]' 00:20:00.030 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.287 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.545 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.479 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.737 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.670 00:20:02.670 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.670 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.670 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.928 { 00:20:02.928 "cntlid": 43, 00:20:02.928 "qid": 0, 00:20:02.928 "state": "enabled", 00:20:02.928 "listen_address": { 00:20:02.928 "trtype": "TCP", 00:20:02.928 "adrfam": "IPv4", 00:20:02.928 "traddr": "10.0.0.2", 00:20:02.928 "trsvcid": "4420" 00:20:02.928 }, 00:20:02.928 "peer_address": { 
00:20:02.928 "trtype": "TCP", 00:20:02.928 "adrfam": "IPv4", 00:20:02.928 "traddr": "10.0.0.1", 00:20:02.928 "trsvcid": "54588" 00:20:02.928 }, 00:20:02.928 "auth": { 00:20:02.928 "state": "completed", 00:20:02.928 "digest": "sha256", 00:20:02.928 "dhgroup": "ffdhe8192" 00:20:02.928 } 00:20:02.928 } 00:20:02.928 ]' 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.928 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.186 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.119 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.377 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.378 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.378 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.378 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.378 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.310 00:20:05.310 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.310 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.310 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.568 { 00:20:05.568 "cntlid": 45, 00:20:05.568 "qid": 0, 00:20:05.568 "state": "enabled", 00:20:05.568 "listen_address": { 00:20:05.568 "trtype": "TCP", 00:20:05.568 "adrfam": "IPv4", 00:20:05.568 "traddr": "10.0.0.2", 00:20:05.568 "trsvcid": "4420" 00:20:05.568 }, 00:20:05.568 "peer_address": { 00:20:05.568 "trtype": "TCP", 00:20:05.568 "adrfam": "IPv4", 00:20:05.568 "traddr": "10.0.0.1", 00:20:05.568 "trsvcid": "54616" 00:20:05.568 }, 00:20:05.568 "auth": { 00:20:05.568 "state": "completed", 00:20:05.568 "digest": "sha256", 00:20:05.568 "dhgroup": "ffdhe8192" 00:20:05.568 } 00:20:05.568 } 00:20:05.568 ]' 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.568 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.826 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.826 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.826 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.826 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.826 19:50:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.083 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.017 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.275 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
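From here the trace finishes the sha256/ffdhe8192 pass and moves on to sha384 with the null dhgroup. The "for digest", "for dhgroup" and "for keyid" lines visible in the prefixes suggest a nesting along these lines; this is a reading of the trace, not the literal script, and the array contents list only values visible in this excerpt.

  # Loop nesting implied by the trace prefixes (illustrative sketch).
  digests=(sha256 sha384)                        # sha384/null begins just below
  dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)  # this excerpt picks up partway through the sha256 pass
  keys=(key0 key1 key2 key3)                     # key ids 0..3, as used in the RPC calls above
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        echo "connect_authenticate $digest $dhgroup $keyid"   # one authenticated attach per combination
      done
    done
  done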
00:20:08.208 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.208 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.209 { 00:20:08.209 "cntlid": 47, 00:20:08.209 "qid": 0, 00:20:08.209 "state": "enabled", 00:20:08.209 "listen_address": { 00:20:08.209 "trtype": "TCP", 00:20:08.209 "adrfam": "IPv4", 00:20:08.209 "traddr": "10.0.0.2", 00:20:08.209 "trsvcid": "4420" 00:20:08.209 }, 00:20:08.209 "peer_address": { 00:20:08.209 "trtype": "TCP", 00:20:08.209 "adrfam": "IPv4", 00:20:08.209 "traddr": "10.0.0.1", 00:20:08.209 "trsvcid": "54654" 00:20:08.209 }, 00:20:08.209 "auth": { 00:20:08.209 "state": "completed", 00:20:08.209 "digest": "sha256", 00:20:08.209 "dhgroup": "ffdhe8192" 00:20:08.209 } 00:20:08.209 } 00:20:08.209 ]' 00:20:08.209 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.209 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.209 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.466 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.466 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.466 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.466 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.466 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.724 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.658 
19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.658 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.915 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.916 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.173 00:20:10.173 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.173 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.173 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.431 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.431 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.431 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.431 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.431 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.431 19:50:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.431 { 00:20:10.431 "cntlid": 49, 00:20:10.431 "qid": 0, 00:20:10.431 "state": "enabled", 00:20:10.431 "listen_address": { 00:20:10.431 "trtype": "TCP", 00:20:10.431 "adrfam": "IPv4", 00:20:10.431 "traddr": "10.0.0.2", 00:20:10.431 "trsvcid": "4420" 00:20:10.431 }, 00:20:10.431 "peer_address": { 00:20:10.431 "trtype": "TCP", 00:20:10.431 "adrfam": "IPv4", 00:20:10.431 "traddr": "10.0.0.1", 00:20:10.431 "trsvcid": "46842" 00:20:10.431 }, 00:20:10.431 "auth": { 00:20:10.431 "state": "completed", 00:20:10.431 "digest": "sha384", 00:20:10.431 "dhgroup": "null" 00:20:10.431 } 00:20:10.431 } 00:20:10.431 ]' 00:20:10.431 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.689 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.947 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.880 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.138 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.704 00:20:12.704 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.704 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.704 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.704 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.704 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.704 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.705 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.705 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.705 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.705 { 00:20:12.705 "cntlid": 51, 00:20:12.705 "qid": 0, 00:20:12.705 "state": "enabled", 00:20:12.705 "listen_address": { 00:20:12.705 "trtype": "TCP", 00:20:12.705 "adrfam": "IPv4", 00:20:12.705 "traddr": "10.0.0.2", 00:20:12.705 "trsvcid": "4420" 00:20:12.705 }, 00:20:12.705 "peer_address": { 00:20:12.705 "trtype": "TCP", 00:20:12.705 "adrfam": "IPv4", 00:20:12.705 "traddr": "10.0.0.1", 00:20:12.705 "trsvcid": "46876" 00:20:12.705 }, 00:20:12.705 "auth": { 00:20:12.705 "state": "completed", 00:20:12.705 "digest": "sha384", 00:20:12.705 "dhgroup": "null" 00:20:12.705 } 00:20:12.705 } 00:20:12.705 ]' 00:20:12.705 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
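Every one of these rounds is verified the same way before the controller is torn down: the host RPC confirms that the nvme0 controller exists, and the target's qpair listing is filtered with jq to check that authentication completed with the expected digest and DH group. A sketch of that check for the sha384/null rounds above, reusing the $rpc and $subnqn placeholders from the earlier sketch:

  # Host side: the attached controller must be reported as nvme0.
  [[ "$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # Target side: inspect the subsystem's qpairs and assert the negotiated auth parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == null ]]      # "null" is the DH group name here, not JSON null
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

  # Detach the host-side controller before moving on to the next key.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0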
00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.962 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.220 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.152 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:14.422 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.701 00:20:14.701 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.701 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.701 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.959 { 00:20:14.959 "cntlid": 53, 00:20:14.959 "qid": 0, 00:20:14.959 "state": "enabled", 00:20:14.959 "listen_address": { 00:20:14.959 "trtype": "TCP", 00:20:14.959 "adrfam": "IPv4", 00:20:14.959 "traddr": "10.0.0.2", 00:20:14.959 "trsvcid": "4420" 00:20:14.959 }, 00:20:14.959 "peer_address": { 00:20:14.959 "trtype": "TCP", 00:20:14.959 "adrfam": "IPv4", 00:20:14.959 "traddr": "10.0.0.1", 00:20:14.959 "trsvcid": "46912" 00:20:14.959 }, 00:20:14.959 "auth": { 00:20:14.959 "state": "completed", 00:20:14.959 "digest": "sha384", 00:20:14.959 "dhgroup": "null" 00:20:14.959 } 00:20:14.959 } 00:20:14.959 ]' 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.959 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.216 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:15.216 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.216 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.216 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.216 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.474 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.408 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.408 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.666 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.924 00:20:16.924 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.924 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.924 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.182 { 00:20:17.182 "cntlid": 55, 00:20:17.182 "qid": 0, 00:20:17.182 "state": "enabled", 00:20:17.182 "listen_address": { 00:20:17.182 "trtype": "TCP", 00:20:17.182 "adrfam": "IPv4", 00:20:17.182 "traddr": "10.0.0.2", 00:20:17.182 "trsvcid": "4420" 00:20:17.182 }, 00:20:17.182 "peer_address": { 00:20:17.182 "trtype": "TCP", 00:20:17.182 "adrfam": "IPv4", 00:20:17.182 "traddr": "10.0.0.1", 00:20:17.182 "trsvcid": "46936" 00:20:17.182 }, 00:20:17.182 "auth": { 00:20:17.182 "state": "completed", 00:20:17.182 "digest": "sha384", 00:20:17.182 "dhgroup": "null" 00:20:17.182 } 00:20:17.182 } 00:20:17.182 ]' 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.182 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.439 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.373 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:18.631 
19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.631 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.197 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.197 { 00:20:19.197 "cntlid": 57, 00:20:19.197 "qid": 0, 00:20:19.197 "state": "enabled", 00:20:19.197 "listen_address": { 00:20:19.197 "trtype": "TCP", 00:20:19.197 "adrfam": "IPv4", 00:20:19.197 "traddr": "10.0.0.2", 00:20:19.197 "trsvcid": "4420" 00:20:19.197 }, 00:20:19.197 "peer_address": { 00:20:19.197 "trtype": "TCP", 00:20:19.197 "adrfam": "IPv4", 00:20:19.197 "traddr": "10.0.0.1", 00:20:19.197 "trsvcid": "46968" 00:20:19.197 }, 00:20:19.197 "auth": { 00:20:19.197 "state": "completed", 00:20:19.197 "digest": "sha384", 00:20:19.197 "dhgroup": "ffdhe2048" 00:20:19.197 } 00:20:19.197 } 00:20:19.197 ]' 00:20:19.197 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.455 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.455 19:50:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.455 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.455 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.455 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.455 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.455 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.714 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.646 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.904 19:50:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.904 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.161 00:20:21.161 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.161 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.161 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.418 { 00:20:21.418 "cntlid": 59, 00:20:21.418 "qid": 0, 00:20:21.418 "state": "enabled", 00:20:21.418 "listen_address": { 00:20:21.418 "trtype": "TCP", 00:20:21.418 "adrfam": "IPv4", 00:20:21.418 "traddr": "10.0.0.2", 00:20:21.418 "trsvcid": "4420" 00:20:21.418 }, 00:20:21.418 "peer_address": { 00:20:21.418 "trtype": "TCP", 00:20:21.418 "adrfam": "IPv4", 00:20:21.418 "traddr": "10.0.0.1", 00:20:21.418 "trsvcid": "35882" 00:20:21.418 }, 00:20:21.418 "auth": { 00:20:21.418 "state": "completed", 00:20:21.418 "digest": "sha384", 00:20:21.418 "dhgroup": "ffdhe2048" 00:20:21.418 } 00:20:21.418 } 00:20:21.418 ]' 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.418 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.675 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.675 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.675 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.675 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.675 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.931 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.862 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.120 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.377 00:20:23.377 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.377 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.377 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.635 { 00:20:23.635 "cntlid": 61, 00:20:23.635 "qid": 0, 00:20:23.635 "state": "enabled", 00:20:23.635 "listen_address": { 00:20:23.635 "trtype": "TCP", 00:20:23.635 "adrfam": "IPv4", 00:20:23.635 "traddr": "10.0.0.2", 00:20:23.635 "trsvcid": "4420" 00:20:23.635 }, 00:20:23.635 "peer_address": { 00:20:23.635 "trtype": "TCP", 00:20:23.635 "adrfam": "IPv4", 00:20:23.635 "traddr": "10.0.0.1", 00:20:23.635 "trsvcid": "35918" 00:20:23.635 }, 00:20:23.635 "auth": { 00:20:23.635 "state": "completed", 00:20:23.635 "digest": "sha384", 00:20:23.635 "dhgroup": "ffdhe2048" 00:20:23.635 } 00:20:23.635 } 00:20:23.635 ]' 00:20:23.635 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.635 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.635 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.892 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.892 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.892 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.892 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.892 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.150 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:25.082 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.339 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:25.339 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.339 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.339 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.339 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.339 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.340 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.340 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.340 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.340 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.340 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.340 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.597 00:20:25.597 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.597 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.597 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.855 { 00:20:25.855 "cntlid": 63, 00:20:25.855 "qid": 0, 00:20:25.855 "state": "enabled", 00:20:25.855 "listen_address": { 00:20:25.855 "trtype": "TCP", 00:20:25.855 "adrfam": "IPv4", 00:20:25.855 "traddr": "10.0.0.2", 00:20:25.855 "trsvcid": "4420" 00:20:25.855 }, 00:20:25.855 "peer_address": { 00:20:25.855 "trtype": "TCP", 00:20:25.855 "adrfam": "IPv4", 00:20:25.855 "traddr": "10.0.0.1", 00:20:25.855 "trsvcid": "35938" 00:20:25.855 }, 00:20:25.855 "auth": { 00:20:25.855 "state": "completed", 00:20:25.855 "digest": 
"sha384", 00:20:25.855 "dhgroup": "ffdhe2048" 00:20:25.855 } 00:20:25.855 } 00:20:25.855 ]' 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.855 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.113 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.113 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.113 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.113 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.113 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.371 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.303 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.561 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.818 00:20:27.818 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.819 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.819 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.076 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.076 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.076 19:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.076 19:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.076 19:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.076 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.076 { 00:20:28.077 "cntlid": 65, 00:20:28.077 "qid": 0, 00:20:28.077 "state": "enabled", 00:20:28.077 "listen_address": { 00:20:28.077 "trtype": "TCP", 00:20:28.077 "adrfam": "IPv4", 00:20:28.077 "traddr": "10.0.0.2", 00:20:28.077 "trsvcid": "4420" 00:20:28.077 }, 00:20:28.077 "peer_address": { 00:20:28.077 "trtype": "TCP", 00:20:28.077 "adrfam": "IPv4", 00:20:28.077 "traddr": "10.0.0.1", 00:20:28.077 "trsvcid": "35966" 00:20:28.077 }, 00:20:28.077 "auth": { 00:20:28.077 "state": "completed", 00:20:28.077 "digest": "sha384", 00:20:28.077 "dhgroup": "ffdhe3072" 00:20:28.077 } 00:20:28.077 } 00:20:28.077 ]' 00:20:28.077 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.077 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.077 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.335 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.335 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.335 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.335 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.335 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.592 
19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.526 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.784 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.041 00:20:30.041 19:50:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.042 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.042 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.299 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.299 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.299 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.299 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.299 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.299 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.299 { 00:20:30.299 "cntlid": 67, 00:20:30.299 "qid": 0, 00:20:30.299 "state": "enabled", 00:20:30.299 "listen_address": { 00:20:30.299 "trtype": "TCP", 00:20:30.299 "adrfam": "IPv4", 00:20:30.299 "traddr": "10.0.0.2", 00:20:30.299 "trsvcid": "4420" 00:20:30.299 }, 00:20:30.299 "peer_address": { 00:20:30.299 "trtype": "TCP", 00:20:30.299 "adrfam": "IPv4", 00:20:30.299 "traddr": "10.0.0.1", 00:20:30.299 "trsvcid": "39264" 00:20:30.299 }, 00:20:30.300 "auth": { 00:20:30.300 "state": "completed", 00:20:30.300 "digest": "sha384", 00:20:30.300 "dhgroup": "ffdhe3072" 00:20:30.300 } 00:20:30.300 } 00:20:30.300 ]' 00:20:30.300 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.557 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.816 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:20:31.748 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.749 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.749 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.749 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.749 
19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.749 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.749 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.749 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.035 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.601 00:20:32.601 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.601 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.601 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.601 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.601 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.601 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.601 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.601 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.601 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.601 { 00:20:32.601 "cntlid": 69, 00:20:32.601 "qid": 0, 00:20:32.601 "state": "enabled", 00:20:32.601 "listen_address": { 
00:20:32.601 "trtype": "TCP", 00:20:32.601 "adrfam": "IPv4", 00:20:32.601 "traddr": "10.0.0.2", 00:20:32.601 "trsvcid": "4420" 00:20:32.601 }, 00:20:32.601 "peer_address": { 00:20:32.601 "trtype": "TCP", 00:20:32.601 "adrfam": "IPv4", 00:20:32.601 "traddr": "10.0.0.1", 00:20:32.601 "trsvcid": "39272" 00:20:32.601 }, 00:20:32.601 "auth": { 00:20:32.601 "state": "completed", 00:20:32.601 "digest": "sha384", 00:20:32.601 "dhgroup": "ffdhe3072" 00:20:32.601 } 00:20:32.601 } 00:20:32.602 ]' 00:20:32.602 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.859 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.117 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.050 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.614 
19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.614 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.615 19:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.872 00:20:34.872 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.872 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.872 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.129 { 00:20:35.129 "cntlid": 71, 00:20:35.129 "qid": 0, 00:20:35.129 "state": "enabled", 00:20:35.129 "listen_address": { 00:20:35.129 "trtype": "TCP", 00:20:35.129 "adrfam": "IPv4", 00:20:35.129 "traddr": "10.0.0.2", 00:20:35.129 "trsvcid": "4420" 00:20:35.129 }, 00:20:35.129 "peer_address": { 00:20:35.129 "trtype": "TCP", 00:20:35.129 "adrfam": "IPv4", 00:20:35.129 "traddr": "10.0.0.1", 00:20:35.129 "trsvcid": "39304" 00:20:35.129 }, 00:20:35.129 "auth": { 00:20:35.129 "state": "completed", 00:20:35.129 "digest": "sha384", 00:20:35.129 "dhgroup": "ffdhe3072" 00:20:35.129 } 00:20:35.129 } 00:20:35.129 ]' 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.129 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.695 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.628 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.628 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.194 00:20:37.194 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.194 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.194 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.452 { 00:20:37.452 "cntlid": 73, 00:20:37.452 "qid": 0, 00:20:37.452 "state": "enabled", 00:20:37.452 "listen_address": { 00:20:37.452 "trtype": "TCP", 00:20:37.452 "adrfam": "IPv4", 00:20:37.452 "traddr": "10.0.0.2", 00:20:37.452 "trsvcid": "4420" 00:20:37.452 }, 00:20:37.452 "peer_address": { 00:20:37.452 "trtype": "TCP", 00:20:37.452 "adrfam": "IPv4", 00:20:37.452 "traddr": "10.0.0.1", 00:20:37.452 "trsvcid": "39336" 00:20:37.452 }, 00:20:37.452 "auth": { 00:20:37.452 "state": "completed", 00:20:37.452 "digest": "sha384", 00:20:37.452 "dhgroup": "ffdhe4096" 00:20:37.452 } 00:20:37.452 } 00:20:37.452 ]' 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.452 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.018 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.950 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.951 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.951 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.951 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.516 00:20:39.516 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.516 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.516 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.773 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.773 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.773 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.774 { 00:20:39.774 "cntlid": 75, 00:20:39.774 "qid": 0, 00:20:39.774 "state": "enabled", 00:20:39.774 "listen_address": { 00:20:39.774 "trtype": "TCP", 00:20:39.774 "adrfam": "IPv4", 00:20:39.774 "traddr": "10.0.0.2", 00:20:39.774 "trsvcid": "4420" 00:20:39.774 }, 00:20:39.774 "peer_address": { 00:20:39.774 "trtype": "TCP", 00:20:39.774 "adrfam": "IPv4", 00:20:39.774 "traddr": "10.0.0.1", 00:20:39.774 "trsvcid": "39364" 00:20:39.774 }, 00:20:39.774 "auth": { 00:20:39.774 "state": "completed", 00:20:39.774 "digest": "sha384", 00:20:39.774 "dhgroup": "ffdhe4096" 00:20:39.774 } 00:20:39.774 } 00:20:39.774 ]' 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.774 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.031 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:20:40.964 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.964 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.964 19:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.964 19:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.221 19:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.221 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.221 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.221 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.479 19:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.737 00:20:41.737 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.737 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.737 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.994 { 00:20:41.994 "cntlid": 77, 00:20:41.994 "qid": 0, 00:20:41.994 "state": "enabled", 00:20:41.994 "listen_address": { 00:20:41.994 "trtype": "TCP", 00:20:41.994 "adrfam": "IPv4", 00:20:41.994 "traddr": "10.0.0.2", 00:20:41.994 "trsvcid": "4420" 00:20:41.994 }, 00:20:41.994 "peer_address": { 00:20:41.994 "trtype": "TCP", 00:20:41.994 "adrfam": "IPv4", 00:20:41.994 "traddr": "10.0.0.1", 00:20:41.994 "trsvcid": "47546" 00:20:41.994 }, 00:20:41.994 "auth": { 00:20:41.994 "state": "completed", 00:20:41.994 "digest": "sha384", 00:20:41.994 "dhgroup": "ffdhe4096" 00:20:41.994 } 00:20:41.994 } 00:20:41.994 ]' 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.994 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.558 19:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.490 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.747 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.748 19:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.005 00:20:44.005 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.005 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.005 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.261 { 00:20:44.261 "cntlid": 79, 00:20:44.261 "qid": 0, 00:20:44.261 "state": "enabled", 00:20:44.261 "listen_address": { 00:20:44.261 "trtype": "TCP", 00:20:44.261 "adrfam": "IPv4", 00:20:44.261 "traddr": "10.0.0.2", 00:20:44.261 "trsvcid": "4420" 00:20:44.261 }, 00:20:44.261 "peer_address": { 00:20:44.261 "trtype": "TCP", 00:20:44.261 "adrfam": "IPv4", 00:20:44.261 "traddr": "10.0.0.1", 00:20:44.261 "trsvcid": "47586" 00:20:44.261 }, 00:20:44.261 "auth": { 00:20:44.261 "state": "completed", 00:20:44.261 "digest": "sha384", 00:20:44.261 "dhgroup": "ffdhe4096" 00:20:44.261 } 00:20:44.261 } 00:20:44.261 ]' 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.261 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.517 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.517 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.517 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.774 19:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.706 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.706 19:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.963 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:45.963 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.963 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.963 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.963 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.963 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.964 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.964 19:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.964 19:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.964 19:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.964 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.964 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.528 00:20:46.528 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.528 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.528 19:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.785 { 00:20:46.785 "cntlid": 81, 00:20:46.785 "qid": 0, 00:20:46.785 "state": "enabled", 00:20:46.785 "listen_address": { 00:20:46.785 "trtype": "TCP", 00:20:46.785 "adrfam": "IPv4", 00:20:46.785 "traddr": "10.0.0.2", 00:20:46.785 "trsvcid": "4420" 00:20:46.785 }, 00:20:46.785 "peer_address": { 00:20:46.785 "trtype": "TCP", 00:20:46.785 "adrfam": "IPv4", 00:20:46.785 "traddr": "10.0.0.1", 00:20:46.785 "trsvcid": "47626" 00:20:46.785 }, 00:20:46.785 "auth": { 00:20:46.785 "state": "completed", 00:20:46.785 "digest": "sha384", 00:20:46.785 "dhgroup": "ffdhe6144" 00:20:46.785 } 00:20:46.785 } 00:20:46.785 ]' 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.785 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.042 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.042 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.043 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.043 19:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.413 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.414 19:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.978 00:20:48.979 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.979 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.979 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.236 { 00:20:49.236 "cntlid": 83, 00:20:49.236 "qid": 0, 00:20:49.236 "state": "enabled", 00:20:49.236 "listen_address": { 00:20:49.236 "trtype": "TCP", 00:20:49.236 "adrfam": "IPv4", 00:20:49.236 "traddr": "10.0.0.2", 00:20:49.236 "trsvcid": "4420" 00:20:49.236 }, 00:20:49.236 "peer_address": { 00:20:49.236 "trtype": "TCP", 00:20:49.236 "adrfam": "IPv4", 00:20:49.236 "traddr": "10.0.0.1", 00:20:49.236 "trsvcid": "47662" 00:20:49.236 }, 00:20:49.236 "auth": { 00:20:49.236 "state": "completed", 00:20:49.236 "digest": "sha384", 00:20:49.236 
"dhgroup": "ffdhe6144" 00:20:49.236 } 00:20:49.236 } 00:20:49.236 ]' 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.236 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.237 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.237 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.237 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.237 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.237 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.495 19:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.461 19:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.720 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.286 00:20:51.286 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.286 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.286 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.545 { 00:20:51.545 "cntlid": 85, 00:20:51.545 "qid": 0, 00:20:51.545 "state": "enabled", 00:20:51.545 "listen_address": { 00:20:51.545 "trtype": "TCP", 00:20:51.545 "adrfam": "IPv4", 00:20:51.545 "traddr": "10.0.0.2", 00:20:51.545 "trsvcid": "4420" 00:20:51.545 }, 00:20:51.545 "peer_address": { 00:20:51.545 "trtype": "TCP", 00:20:51.545 "adrfam": "IPv4", 00:20:51.545 "traddr": "10.0.0.1", 00:20:51.545 "trsvcid": "32776" 00:20:51.545 }, 00:20:51.545 "auth": { 00:20:51.545 "state": "completed", 00:20:51.545 "digest": "sha384", 00:20:51.545 "dhgroup": "ffdhe6144" 00:20:51.545 } 00:20:51.545 } 00:20:51.545 ]' 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.545 19:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.803 19:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.803 19:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.803 19:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.803 19:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.803 19:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.060 19:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.993 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.251 19:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.815 00:20:53.815 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.815 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.815 19:51:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.073 { 00:20:54.073 "cntlid": 87, 00:20:54.073 "qid": 0, 00:20:54.073 "state": "enabled", 00:20:54.073 "listen_address": { 00:20:54.073 "trtype": "TCP", 00:20:54.073 "adrfam": "IPv4", 00:20:54.073 "traddr": "10.0.0.2", 00:20:54.073 "trsvcid": "4420" 00:20:54.073 }, 00:20:54.073 "peer_address": { 00:20:54.073 "trtype": "TCP", 00:20:54.073 "adrfam": "IPv4", 00:20:54.073 "traddr": "10.0.0.1", 00:20:54.073 "trsvcid": "32792" 00:20:54.073 }, 00:20:54.073 "auth": { 00:20:54.073 "state": "completed", 00:20:54.073 "digest": "sha384", 00:20:54.073 "dhgroup": "ffdhe6144" 00:20:54.073 } 00:20:54.073 } 00:20:54.073 ]' 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.073 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.331 19:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.262 19:51:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.262 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.520 19:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.453 00:20:56.453 19:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.453 19:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.453 19:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.711 { 00:20:56.711 "cntlid": 89, 00:20:56.711 "qid": 0, 00:20:56.711 "state": "enabled", 00:20:56.711 "listen_address": { 00:20:56.711 "trtype": "TCP", 00:20:56.711 "adrfam": "IPv4", 00:20:56.711 "traddr": "10.0.0.2", 00:20:56.711 
"trsvcid": "4420" 00:20:56.711 }, 00:20:56.711 "peer_address": { 00:20:56.711 "trtype": "TCP", 00:20:56.711 "adrfam": "IPv4", 00:20:56.711 "traddr": "10.0.0.1", 00:20:56.711 "trsvcid": "32814" 00:20:56.711 }, 00:20:56.711 "auth": { 00:20:56.711 "state": "completed", 00:20:56.711 "digest": "sha384", 00:20:56.711 "dhgroup": "ffdhe8192" 00:20:56.711 } 00:20:56.711 } 00:20:56.711 ]' 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.711 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.969 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.969 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.969 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.969 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.969 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.227 19:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.160 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.418 19:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.351 00:20:59.351 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.351 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.351 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.609 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.609 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.610 { 00:20:59.610 "cntlid": 91, 00:20:59.610 "qid": 0, 00:20:59.610 "state": "enabled", 00:20:59.610 "listen_address": { 00:20:59.610 "trtype": "TCP", 00:20:59.610 "adrfam": "IPv4", 00:20:59.610 "traddr": "10.0.0.2", 00:20:59.610 "trsvcid": "4420" 00:20:59.610 }, 00:20:59.610 "peer_address": { 00:20:59.610 "trtype": "TCP", 00:20:59.610 "adrfam": "IPv4", 00:20:59.610 "traddr": "10.0.0.1", 00:20:59.610 "trsvcid": "32834" 00:20:59.610 }, 00:20:59.610 "auth": { 00:20:59.610 "state": "completed", 00:20:59.610 "digest": "sha384", 00:20:59.610 "dhgroup": "ffdhe8192" 00:20:59.610 } 00:20:59.610 } 00:20:59.610 ]' 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.610 19:51:08 
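The check performed after each attach is visible in the trace as a series of jq probes; a sketch of that verification step, assuming the same socket layout as above:

  # the attached controller should show up by name on the host side
  [[ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  # the target's qpair listing carries the negotiated authentication parameters
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha384" ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe8192" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]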
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.610 19:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.868 19:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.801 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.060 19:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.993 00:21:01.993 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.993 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.993 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.251 { 00:21:02.251 "cntlid": 93, 00:21:02.251 "qid": 0, 00:21:02.251 "state": "enabled", 00:21:02.251 "listen_address": { 00:21:02.251 "trtype": "TCP", 00:21:02.251 "adrfam": "IPv4", 00:21:02.251 "traddr": "10.0.0.2", 00:21:02.251 "trsvcid": "4420" 00:21:02.251 }, 00:21:02.251 "peer_address": { 00:21:02.251 "trtype": "TCP", 00:21:02.251 "adrfam": "IPv4", 00:21:02.251 "traddr": "10.0.0.1", 00:21:02.251 "trsvcid": "56492" 00:21:02.251 }, 00:21:02.251 "auth": { 00:21:02.251 "state": "completed", 00:21:02.251 "digest": "sha384", 00:21:02.251 "dhgroup": "ffdhe8192" 00:21:02.251 } 00:21:02.251 } 00:21:02.251 ]' 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.251 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.509 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.509 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.509 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.766 19:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.699 19:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.957 19:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.889 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.889 19:51:14 
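The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that appears in the trace is what makes the key3 cases one-way: the controller key option is added only when a controller secret exists for that key index. A rough sketch of the idiom, with hypothetical stand-in values:

  # hypothetical values: only key indexes 0..2 have a controller (bidirectional) secret
  ckeys=( "secret0" "secret1" "secret2" "" )
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # ${var:+word} expands to nothing when var is empty, so for keyid=3 the array is empty and
  # add_host/attach_controller receive only --dhchap-key key3 (unidirectional authentication),
  # while keyid 0..2 also pass --dhchap-ctrlr-key ckeyN, as seen in the calls above.
  echo "${ckey[@]}"    # empty for keyid=3; '--dhchap-ctrlr-key ckey2' when keyid=2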
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.889 { 00:21:04.889 "cntlid": 95, 00:21:04.889 "qid": 0, 00:21:04.889 "state": "enabled", 00:21:04.889 "listen_address": { 00:21:04.889 "trtype": "TCP", 00:21:04.889 "adrfam": "IPv4", 00:21:04.889 "traddr": "10.0.0.2", 00:21:04.889 "trsvcid": "4420" 00:21:04.889 }, 00:21:04.889 "peer_address": { 00:21:04.889 "trtype": "TCP", 00:21:04.889 "adrfam": "IPv4", 00:21:04.889 "traddr": "10.0.0.1", 00:21:04.889 "trsvcid": "56512" 00:21:04.889 }, 00:21:04.889 "auth": { 00:21:04.889 "state": "completed", 00:21:04.889 "digest": "sha384", 00:21:04.889 "dhgroup": "ffdhe8192" 00:21:04.889 } 00:21:04.889 } 00:21:04.889 ]' 00:21:04.889 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.147 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.404 19:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.337 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.595 19:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.852 00:21:06.852 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.852 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.852 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.110 { 00:21:07.110 "cntlid": 97, 00:21:07.110 "qid": 0, 00:21:07.110 "state": "enabled", 00:21:07.110 "listen_address": { 00:21:07.110 "trtype": "TCP", 00:21:07.110 "adrfam": "IPv4", 00:21:07.110 "traddr": "10.0.0.2", 00:21:07.110 "trsvcid": "4420" 00:21:07.110 }, 00:21:07.110 "peer_address": { 00:21:07.110 "trtype": "TCP", 00:21:07.110 "adrfam": "IPv4", 00:21:07.110 "traddr": "10.0.0.1", 00:21:07.110 "trsvcid": "56532" 00:21:07.110 }, 00:21:07.110 "auth": { 00:21:07.110 "state": "completed", 00:21:07.110 "digest": "sha512", 00:21:07.110 "dhgroup": "null" 00:21:07.110 } 00:21:07.110 } 00:21:07.110 ]' 00:21:07.110 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.367 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.625 19:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.618 19:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.876 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.134 00:21:09.134 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.134 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.134 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.392 { 00:21:09.392 "cntlid": 99, 00:21:09.392 "qid": 0, 00:21:09.392 "state": "enabled", 00:21:09.392 "listen_address": { 00:21:09.392 "trtype": "TCP", 00:21:09.392 "adrfam": "IPv4", 00:21:09.392 "traddr": "10.0.0.2", 00:21:09.392 "trsvcid": "4420" 00:21:09.392 }, 00:21:09.392 "peer_address": { 00:21:09.392 "trtype": "TCP", 00:21:09.392 "adrfam": "IPv4", 00:21:09.392 "traddr": "10.0.0.1", 00:21:09.392 "trsvcid": "56566" 00:21:09.392 }, 00:21:09.392 "auth": { 00:21:09.392 "state": "completed", 00:21:09.392 "digest": "sha512", 00:21:09.392 "dhgroup": "null" 00:21:09.392 } 00:21:09.392 } 00:21:09.392 ]' 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:09.392 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.648 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.648 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.648 19:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.905 19:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 
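Alongside the SPDK initiator, each combination is also exercised with the kernel initiator; a sketch of that leg, with the DHHC-1 secret strings abbreviated here (the full values appear in the trace):

  # kernel host: connect with the plaintext DH-HMAC-CHAP secrets for the configured key pair
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  # tear down so the next digest/dhgroup/key combination starts from a clean state
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55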
00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.838 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.095 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.096 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.096 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.096 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.096 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.096 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.353 00:21:11.353 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.353 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.353 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.610 { 00:21:11.610 "cntlid": 101, 00:21:11.610 "qid": 0, 00:21:11.610 "state": "enabled", 00:21:11.610 "listen_address": { 00:21:11.610 "trtype": "TCP", 00:21:11.610 "adrfam": "IPv4", 00:21:11.610 "traddr": "10.0.0.2", 00:21:11.610 "trsvcid": "4420" 00:21:11.610 }, 00:21:11.610 "peer_address": { 00:21:11.610 "trtype": "TCP", 00:21:11.610 "adrfam": "IPv4", 00:21:11.610 "traddr": "10.0.0.1", 00:21:11.610 "trsvcid": "47976" 00:21:11.610 }, 00:21:11.610 "auth": { 00:21:11.610 "state": "completed", 00:21:11.610 "digest": "sha512", 00:21:11.610 "dhgroup": "null" 00:21:11.610 } 00:21:11.610 } 00:21:11.610 ]' 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:11.610 19:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.610 19:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.610 19:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.610 19:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.867 19:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.238 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.495 00:21:13.495 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.495 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.495 19:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.752 { 00:21:13.752 "cntlid": 103, 00:21:13.752 "qid": 0, 00:21:13.752 "state": "enabled", 00:21:13.752 "listen_address": { 00:21:13.752 "trtype": "TCP", 00:21:13.752 "adrfam": "IPv4", 00:21:13.752 "traddr": "10.0.0.2", 00:21:13.752 "trsvcid": "4420" 00:21:13.752 }, 00:21:13.752 "peer_address": { 00:21:13.752 "trtype": "TCP", 00:21:13.752 "adrfam": "IPv4", 00:21:13.752 "traddr": "10.0.0.1", 00:21:13.752 "trsvcid": "47998" 00:21:13.752 }, 00:21:13.752 "auth": { 00:21:13.752 "state": "completed", 00:21:13.752 "digest": "sha512", 00:21:13.752 "dhgroup": "null" 00:21:13.752 } 00:21:13.752 } 00:21:13.752 ]' 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.752 19:51:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.752 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.010 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:14.010 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.010 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.010 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.010 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.266 19:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:21:15.195 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.195 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.195 19:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.195 19:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.195 19:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.196 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.196 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.196 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.196 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.453 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:15.453 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.453 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.454 19:51:24 
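Every hostrpc call in the trace expands (at target/auth.sh@31) to rpc.py with -s /var/tmp/host.sock, i.e. the second SPDK application acting as the NVMe-oF host, while rpc_cmd talks to the target's default socket. A sketch of what that helper presumably looks like:

  # assumed helper: route RPCs to the SPDK app acting as the NVMe-oF host/initiator
  hostrpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048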
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.454 19:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.711 00:21:15.711 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.711 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.711 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.970 { 00:21:15.970 "cntlid": 105, 00:21:15.970 "qid": 0, 00:21:15.970 "state": "enabled", 00:21:15.970 "listen_address": { 00:21:15.970 "trtype": "TCP", 00:21:15.970 "adrfam": "IPv4", 00:21:15.970 "traddr": "10.0.0.2", 00:21:15.970 "trsvcid": "4420" 00:21:15.970 }, 00:21:15.970 "peer_address": { 00:21:15.970 "trtype": "TCP", 00:21:15.970 "adrfam": "IPv4", 00:21:15.970 "traddr": "10.0.0.1", 00:21:15.970 "trsvcid": "48020" 00:21:15.970 }, 00:21:15.970 "auth": { 00:21:15.970 "state": "completed", 00:21:15.970 "digest": "sha512", 00:21:15.970 "dhgroup": "ffdhe2048" 00:21:15.970 } 00:21:15.970 } 00:21:15.970 ]' 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.970 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.227 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.227 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.227 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.486 19:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.418 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.676 19:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.933 00:21:17.933 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.933 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.933 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.191 { 00:21:18.191 "cntlid": 107, 00:21:18.191 "qid": 0, 00:21:18.191 "state": "enabled", 00:21:18.191 "listen_address": { 00:21:18.191 "trtype": "TCP", 00:21:18.191 "adrfam": "IPv4", 00:21:18.191 "traddr": "10.0.0.2", 00:21:18.191 "trsvcid": "4420" 00:21:18.191 }, 00:21:18.191 "peer_address": { 00:21:18.191 "trtype": "TCP", 00:21:18.191 "adrfam": "IPv4", 00:21:18.191 "traddr": "10.0.0.1", 00:21:18.191 "trsvcid": "48048" 00:21:18.191 }, 00:21:18.191 "auth": { 00:21:18.191 "state": "completed", 00:21:18.191 "digest": "sha512", 00:21:18.191 "dhgroup": "ffdhe2048" 00:21:18.191 } 00:21:18.191 } 00:21:18.191 ]' 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.191 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.448 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.448 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.448 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.706 19:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.640 19:51:28 
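The @91/@92/@93 markers in the trace are the three nested loops driving all of the above; roughly, auth.sh walks every digest, DH group and key index and calls connect_authenticate for each combination, along these lines (digests/dhgroups/keys are assumed to be arrays populated earlier in the script):

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # reconfigure the host initiator, then run one connect/verify/teardown cycle
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done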
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.640 19:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.898 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.156 00:21:20.156 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.156 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.156 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.413 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.413 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.413 19:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.413 19:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.413 19:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.414 { 00:21:20.414 "cntlid": 109, 00:21:20.414 "qid": 0, 00:21:20.414 "state": "enabled", 00:21:20.414 "listen_address": { 00:21:20.414 "trtype": "TCP", 00:21:20.414 "adrfam": "IPv4", 00:21:20.414 "traddr": "10.0.0.2", 00:21:20.414 "trsvcid": "4420" 00:21:20.414 }, 00:21:20.414 "peer_address": { 00:21:20.414 "trtype": "TCP", 00:21:20.414 
"adrfam": "IPv4", 00:21:20.414 "traddr": "10.0.0.1", 00:21:20.414 "trsvcid": "53922" 00:21:20.414 }, 00:21:20.414 "auth": { 00:21:20.414 "state": "completed", 00:21:20.414 "digest": "sha512", 00:21:20.414 "dhgroup": "ffdhe2048" 00:21:20.414 } 00:21:20.414 } 00:21:20.414 ]' 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.414 19:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.672 19:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.605 19:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.862 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:21.862 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.862 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.862 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:21.862 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.862 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.863 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.863 19:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.863 19:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.863 19:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.863 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.863 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.429 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.429 { 00:21:22.429 "cntlid": 111, 00:21:22.429 "qid": 0, 00:21:22.429 "state": "enabled", 00:21:22.429 "listen_address": { 00:21:22.429 "trtype": "TCP", 00:21:22.429 "adrfam": "IPv4", 00:21:22.429 "traddr": "10.0.0.2", 00:21:22.429 "trsvcid": "4420" 00:21:22.429 }, 00:21:22.429 "peer_address": { 00:21:22.429 "trtype": "TCP", 00:21:22.429 "adrfam": "IPv4", 00:21:22.429 "traddr": "10.0.0.1", 00:21:22.429 "trsvcid": "53936" 00:21:22.429 }, 00:21:22.429 "auth": { 00:21:22.429 "state": "completed", 00:21:22.429 "digest": "sha512", 00:21:22.429 "dhgroup": "ffdhe2048" 00:21:22.429 } 00:21:22.429 } 00:21:22.429 ]' 00:21:22.429 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.686 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.686 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.686 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.686 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.687 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.687 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.687 19:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.944 19:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.874 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.131 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
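For reference, one connect_authenticate round as traced above (digest sha512, dhgroup ffdhe3072, key index 0) reduces to the three RPC calls below. This is only a condensed sketch of what the traced script does: rpc.py stands for spdk/scripts/rpc.py as invoked throughout this log, the target-side call is assumed to go to the default RPC socket, and key0/ckey0 name DH-HMAC-CHAP keys registered earlier in the run.

  # host side: restrict the initiator to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # target side: allow the host NQN with key0 (and ckey0 for bidirectional auth)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller, which triggers DH-HMAC-CHAP on the new queue pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0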
00:21:24.694 00:21:24.694 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.694 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.694 19:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.694 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.694 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.694 19:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.694 19:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.952 { 00:21:24.952 "cntlid": 113, 00:21:24.952 "qid": 0, 00:21:24.952 "state": "enabled", 00:21:24.952 "listen_address": { 00:21:24.952 "trtype": "TCP", 00:21:24.952 "adrfam": "IPv4", 00:21:24.952 "traddr": "10.0.0.2", 00:21:24.952 "trsvcid": "4420" 00:21:24.952 }, 00:21:24.952 "peer_address": { 00:21:24.952 "trtype": "TCP", 00:21:24.952 "adrfam": "IPv4", 00:21:24.952 "traddr": "10.0.0.1", 00:21:24.952 "trsvcid": "53964" 00:21:24.952 }, 00:21:24.952 "auth": { 00:21:24.952 "state": "completed", 00:21:24.952 "digest": "sha512", 00:21:24.952 "dhgroup": "ffdhe3072" 00:21:24.952 } 00:21:24.952 } 00:21:24.952 ]' 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.952 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.210 19:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
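The checks that follow each attach (the target/auth.sh@44-@49 entries in the traces, applied above to the qpair with cntlid 113) boil down to the assertions below. A minimal sketch under the same socket assumptions as before; $qpairs is the JSON array printed by nvmf_subsystem_get_qpairs.

  # the controller created on the host side must be the one we attached
  [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # the target must report the qpair as authenticated with the expected parameters
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # clean up before the next key/dhgroup combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0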
00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.187 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.445 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.446 19:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.703 00:21:26.703 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.703 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.703 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.961 { 00:21:26.961 
"cntlid": 115, 00:21:26.961 "qid": 0, 00:21:26.961 "state": "enabled", 00:21:26.961 "listen_address": { 00:21:26.961 "trtype": "TCP", 00:21:26.961 "adrfam": "IPv4", 00:21:26.961 "traddr": "10.0.0.2", 00:21:26.961 "trsvcid": "4420" 00:21:26.961 }, 00:21:26.961 "peer_address": { 00:21:26.961 "trtype": "TCP", 00:21:26.961 "adrfam": "IPv4", 00:21:26.961 "traddr": "10.0.0.1", 00:21:26.961 "trsvcid": "53990" 00:21:26.961 }, 00:21:26.961 "auth": { 00:21:26.961 "state": "completed", 00:21:26.961 "digest": "sha512", 00:21:26.961 "dhgroup": "ffdhe3072" 00:21:26.961 } 00:21:26.961 } 00:21:26.961 ]' 00:21:26.961 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.219 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.477 19:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.414 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.672 19:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.930 00:21:28.930 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.930 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.930 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.496 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.496 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.496 19:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.496 19:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.496 19:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.496 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.496 { 00:21:29.496 "cntlid": 117, 00:21:29.496 "qid": 0, 00:21:29.496 "state": "enabled", 00:21:29.496 "listen_address": { 00:21:29.496 "trtype": "TCP", 00:21:29.496 "adrfam": "IPv4", 00:21:29.496 "traddr": "10.0.0.2", 00:21:29.496 "trsvcid": "4420" 00:21:29.496 }, 00:21:29.496 "peer_address": { 00:21:29.496 "trtype": "TCP", 00:21:29.497 "adrfam": "IPv4", 00:21:29.497 "traddr": "10.0.0.1", 00:21:29.497 "trsvcid": "54004" 00:21:29.497 }, 00:21:29.497 "auth": { 00:21:29.497 "state": "completed", 00:21:29.497 "digest": "sha512", 00:21:29.497 "dhgroup": "ffdhe3072" 00:21:29.497 } 00:21:29.497 } 00:21:29.497 ]' 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.497 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.755 19:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.692 19:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.950 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.208 00:21:31.208 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.208 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.208 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.466 { 00:21:31.466 "cntlid": 119, 00:21:31.466 "qid": 0, 00:21:31.466 "state": "enabled", 00:21:31.466 "listen_address": { 00:21:31.466 "trtype": "TCP", 00:21:31.466 "adrfam": "IPv4", 00:21:31.466 "traddr": "10.0.0.2", 00:21:31.466 "trsvcid": "4420" 00:21:31.466 }, 00:21:31.466 "peer_address": { 00:21:31.466 "trtype": "TCP", 00:21:31.466 "adrfam": "IPv4", 00:21:31.466 "traddr": "10.0.0.1", 00:21:31.466 "trsvcid": "41704" 00:21:31.466 }, 00:21:31.466 "auth": { 00:21:31.466 "state": "completed", 00:21:31.466 "digest": "sha512", 00:21:31.466 "dhgroup": "ffdhe3072" 00:21:31.466 } 00:21:31.466 } 00:21:31.466 ]' 00:21:31.466 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.724 19:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.981 19:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.918 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.176 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.742 00:21:33.742 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.742 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.742 19:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.742 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.742 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.742 19:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.742 19:51:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.742 19:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.742 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.742 { 00:21:33.742 "cntlid": 121, 00:21:33.742 "qid": 0, 00:21:33.742 "state": "enabled", 00:21:33.742 "listen_address": { 00:21:33.742 "trtype": "TCP", 00:21:33.742 "adrfam": "IPv4", 00:21:33.742 "traddr": "10.0.0.2", 00:21:33.742 "trsvcid": "4420" 00:21:33.742 }, 00:21:33.742 "peer_address": { 00:21:33.742 "trtype": "TCP", 00:21:33.742 "adrfam": "IPv4", 00:21:33.742 "traddr": "10.0.0.1", 00:21:33.742 "trsvcid": "41734" 00:21:33.742 }, 00:21:33.742 "auth": { 00:21:33.742 "state": "completed", 00:21:33.742 "digest": "sha512", 00:21:33.742 "dhgroup": "ffdhe4096" 00:21:33.742 } 00:21:33.742 } 00:21:33.742 ]' 00:21:33.742 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.000 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.257 19:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.188 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.445 19:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.702 00:21:35.702 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.702 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.702 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.960 { 00:21:35.960 "cntlid": 123, 00:21:35.960 "qid": 0, 00:21:35.960 "state": "enabled", 00:21:35.960 "listen_address": { 00:21:35.960 "trtype": "TCP", 00:21:35.960 "adrfam": "IPv4", 00:21:35.960 "traddr": "10.0.0.2", 00:21:35.960 "trsvcid": "4420" 00:21:35.960 }, 00:21:35.960 "peer_address": { 00:21:35.960 "trtype": "TCP", 00:21:35.960 "adrfam": "IPv4", 00:21:35.960 "traddr": "10.0.0.1", 00:21:35.960 "trsvcid": "41762" 00:21:35.960 }, 00:21:35.960 "auth": { 00:21:35.960 "state": "completed", 00:21:35.960 "digest": "sha512", 00:21:35.960 "dhgroup": "ffdhe4096" 00:21:35.960 } 00:21:35.960 } 00:21:35.960 ]' 00:21:35.960 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.217 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.476 19:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.409 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.666 
19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.666 19:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.923 00:21:37.923 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.923 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.923 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.180 { 00:21:38.180 "cntlid": 125, 00:21:38.180 "qid": 0, 00:21:38.180 "state": "enabled", 00:21:38.180 "listen_address": { 00:21:38.180 "trtype": "TCP", 00:21:38.180 "adrfam": "IPv4", 00:21:38.180 "traddr": "10.0.0.2", 00:21:38.180 "trsvcid": "4420" 00:21:38.180 }, 00:21:38.180 "peer_address": { 00:21:38.180 "trtype": "TCP", 00:21:38.180 "adrfam": "IPv4", 00:21:38.180 "traddr": "10.0.0.1", 00:21:38.180 "trsvcid": "41786" 00:21:38.180 }, 00:21:38.180 "auth": { 00:21:38.180 "state": "completed", 00:21:38.180 "digest": "sha512", 00:21:38.180 "dhgroup": "ffdhe4096" 00:21:38.180 } 00:21:38.180 } 00:21:38.180 ]' 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.180 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.437 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.437 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.437 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.437 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.437 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.694 19:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.629 19:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.887 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.456 00:21:40.456 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.456 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.456 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.715 { 00:21:40.715 "cntlid": 127, 00:21:40.715 "qid": 0, 00:21:40.715 "state": "enabled", 00:21:40.715 "listen_address": { 00:21:40.715 "trtype": "TCP", 00:21:40.715 "adrfam": "IPv4", 00:21:40.715 "traddr": "10.0.0.2", 00:21:40.715 "trsvcid": "4420" 00:21:40.715 }, 00:21:40.715 "peer_address": { 00:21:40.715 "trtype": "TCP", 00:21:40.715 "adrfam": "IPv4", 00:21:40.715 "traddr": "10.0.0.1", 00:21:40.715 "trsvcid": "57818" 00:21:40.715 }, 00:21:40.715 "auth": { 00:21:40.715 "state": "completed", 00:21:40.715 "digest": "sha512", 00:21:40.715 "dhgroup": "ffdhe4096" 00:21:40.715 } 00:21:40.715 } 00:21:40.715 ]' 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.715 19:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.715 19:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.715 19:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.715 19:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.972 19:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
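In between these SPDK-initiator iterations the script also exercises the kernel host through nvme-cli (the auth.sh@52/@55/@56 entries above): it connects with a DHHC-1 secret pair, expects the fabric connect to authenticate, then disconnects and removes the host from the subsystem. A sketch of one such round-trip, reusing the key0/ckey0 secret strings from this run:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==:' \
      --dhchap-ctrl-secret 'DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=:'
  # expect "disconnected 1 controller(s)", as printed by the runs above
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0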
00:21:41.908 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.166 19:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.732 00:21:42.992 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.992 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.992 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.283 { 00:21:43.283 "cntlid": 129, 00:21:43.283 "qid": 0, 00:21:43.283 "state": "enabled", 00:21:43.283 "listen_address": { 00:21:43.283 "trtype": "TCP", 00:21:43.283 "adrfam": "IPv4", 00:21:43.283 "traddr": "10.0.0.2", 00:21:43.283 "trsvcid": "4420" 00:21:43.283 }, 00:21:43.283 "peer_address": { 00:21:43.283 "trtype": "TCP", 00:21:43.283 "adrfam": "IPv4", 00:21:43.283 "traddr": "10.0.0.1", 00:21:43.283 "trsvcid": "57834" 00:21:43.283 }, 00:21:43.283 "auth": { 
00:21:43.283 "state": "completed", 00:21:43.283 "digest": "sha512", 00:21:43.283 "dhgroup": "ffdhe6144" 00:21:43.283 } 00:21:43.283 } 00:21:43.283 ]' 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.283 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.540 19:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.472 19:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.730 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.296 00:21:45.296 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.296 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.296 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.553 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.553 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.553 19:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.554 { 00:21:45.554 "cntlid": 131, 00:21:45.554 "qid": 0, 00:21:45.554 "state": "enabled", 00:21:45.554 "listen_address": { 00:21:45.554 "trtype": "TCP", 00:21:45.554 "adrfam": "IPv4", 00:21:45.554 "traddr": "10.0.0.2", 00:21:45.554 "trsvcid": "4420" 00:21:45.554 }, 00:21:45.554 "peer_address": { 00:21:45.554 "trtype": "TCP", 00:21:45.554 "adrfam": "IPv4", 00:21:45.554 "traddr": "10.0.0.1", 00:21:45.554 "trsvcid": "57860" 00:21:45.554 }, 00:21:45.554 "auth": { 00:21:45.554 "state": "completed", 00:21:45.554 "digest": "sha512", 00:21:45.554 "dhgroup": "ffdhe6144" 00:21:45.554 } 00:21:45.554 } 00:21:45.554 ]' 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.554 19:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.813 19:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.749 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.008 19:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:47.574 00:21:47.574 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.574 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.574 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.833 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.833 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.833 19:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.833 19:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.091 { 00:21:48.091 "cntlid": 133, 00:21:48.091 "qid": 0, 00:21:48.091 "state": "enabled", 00:21:48.091 "listen_address": { 00:21:48.091 "trtype": "TCP", 00:21:48.091 "adrfam": "IPv4", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "trsvcid": "4420" 00:21:48.091 }, 00:21:48.091 "peer_address": { 00:21:48.091 "trtype": "TCP", 00:21:48.091 "adrfam": "IPv4", 00:21:48.091 "traddr": "10.0.0.1", 00:21:48.091 "trsvcid": "57874" 00:21:48.091 }, 00:21:48.091 "auth": { 00:21:48.091 "state": "completed", 00:21:48.091 "digest": "sha512", 00:21:48.091 "dhgroup": "ffdhe6144" 00:21:48.091 } 00:21:48.091 } 00:21:48.091 ]' 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.091 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.349 19:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.287 19:51:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.287 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.545 19:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.112 00:21:50.112 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.112 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.112 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.371 { 00:21:50.371 "cntlid": 135, 00:21:50.371 "qid": 0, 00:21:50.371 "state": "enabled", 00:21:50.371 "listen_address": { 
00:21:50.371 "trtype": "TCP", 00:21:50.371 "adrfam": "IPv4", 00:21:50.371 "traddr": "10.0.0.2", 00:21:50.371 "trsvcid": "4420" 00:21:50.371 }, 00:21:50.371 "peer_address": { 00:21:50.371 "trtype": "TCP", 00:21:50.371 "adrfam": "IPv4", 00:21:50.371 "traddr": "10.0.0.1", 00:21:50.371 "trsvcid": "57904" 00:21:50.371 }, 00:21:50.371 "auth": { 00:21:50.371 "state": "completed", 00:21:50.371 "digest": "sha512", 00:21:50.371 "dhgroup": "ffdhe6144" 00:21:50.371 } 00:21:50.371 } 00:21:50.371 ]' 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.371 19:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.629 19:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:21:51.565 19:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.823 19:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.823 19:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.823 19:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.823 19:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.823 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.823 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.823 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.823 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.082 19:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.019 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.019 { 00:21:53.019 "cntlid": 137, 00:21:53.019 "qid": 0, 00:21:53.019 "state": "enabled", 00:21:53.019 "listen_address": { 00:21:53.019 "trtype": "TCP", 00:21:53.019 "adrfam": "IPv4", 00:21:53.019 "traddr": "10.0.0.2", 00:21:53.019 "trsvcid": "4420" 00:21:53.019 }, 00:21:53.019 "peer_address": { 00:21:53.019 "trtype": "TCP", 00:21:53.019 "adrfam": "IPv4", 00:21:53.019 "traddr": "10.0.0.1", 00:21:53.019 "trsvcid": "44518" 00:21:53.019 }, 00:21:53.019 "auth": { 00:21:53.019 "state": "completed", 00:21:53.019 "digest": "sha512", 00:21:53.019 "dhgroup": "ffdhe8192" 00:21:53.019 } 00:21:53.019 } 00:21:53.019 ]' 00:21:53.019 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.276 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.276 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.276 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.276 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.276 19:52:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.276 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.276 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.533 19:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.468 19:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.726 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.726 19:52:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.663 00:21:55.663 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.663 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.663 19:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.921 { 00:21:55.921 "cntlid": 139, 00:21:55.921 "qid": 0, 00:21:55.921 "state": "enabled", 00:21:55.921 "listen_address": { 00:21:55.921 "trtype": "TCP", 00:21:55.921 "adrfam": "IPv4", 00:21:55.921 "traddr": "10.0.0.2", 00:21:55.921 "trsvcid": "4420" 00:21:55.921 }, 00:21:55.921 "peer_address": { 00:21:55.921 "trtype": "TCP", 00:21:55.921 "adrfam": "IPv4", 00:21:55.921 "traddr": "10.0.0.1", 00:21:55.921 "trsvcid": "44544" 00:21:55.921 }, 00:21:55.921 "auth": { 00:21:55.921 "state": "completed", 00:21:55.921 "digest": "sha512", 00:21:55.921 "dhgroup": "ffdhe8192" 00:21:55.921 } 00:21:55.921 } 00:21:55.921 ]' 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.921 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.178 19:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWI3ZGVmNzE0NWI3MGU2NTBkZDcxODBjZDk4NjljZGRaVUkU: --dhchap-ctrl-secret DHHC-1:02:ZmExMmIzY2FjN2QyODU5YTZlZjI1YWY0YjBhM2YwNjE3MmI4NjYzZWY1ZDMyZWJlw1CcSA==: 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.110 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.368 19:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.304 00:21:58.304 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.304 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.304 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.562 { 00:21:58.562 "cntlid": 141, 00:21:58.562 "qid": 0, 00:21:58.562 "state": "enabled", 00:21:58.562 "listen_address": { 00:21:58.562 "trtype": "TCP", 00:21:58.562 "adrfam": "IPv4", 00:21:58.562 "traddr": "10.0.0.2", 00:21:58.562 "trsvcid": "4420" 00:21:58.562 }, 00:21:58.562 "peer_address": { 00:21:58.562 "trtype": "TCP", 00:21:58.562 "adrfam": "IPv4", 00:21:58.562 "traddr": "10.0.0.1", 00:21:58.562 "trsvcid": "44558" 00:21:58.562 }, 00:21:58.562 "auth": { 00:21:58.562 "state": "completed", 00:21:58.562 "digest": "sha512", 00:21:58.562 "dhgroup": "ffdhe8192" 00:21:58.562 } 00:21:58.562 } 00:21:58.562 ]' 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.562 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.820 19:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.820 19:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.820 19:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.820 19:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.078 19:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGM1Y2NhMjdiYmYzYmNkZjg4ZDg2YzU1NTM3NTNkY2NiY2Y2MjQwMGFiMTA3ODg2o5gE0Q==: --dhchap-ctrl-secret DHHC-1:01:NTVmNzU5ZTUwZWZlNTRkYTU5ZWJkM2ViNzdhMDY5YjnTsqzM: 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:00.015 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.273 19:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.240 00:22:01.240 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.240 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.240 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.504 { 00:22:01.504 "cntlid": 143, 00:22:01.504 "qid": 0, 00:22:01.504 "state": "enabled", 00:22:01.504 "listen_address": { 00:22:01.504 "trtype": "TCP", 00:22:01.504 "adrfam": "IPv4", 00:22:01.504 "traddr": "10.0.0.2", 00:22:01.504 "trsvcid": "4420" 00:22:01.504 }, 00:22:01.504 "peer_address": { 00:22:01.504 "trtype": "TCP", 00:22:01.504 "adrfam": "IPv4", 00:22:01.504 "traddr": "10.0.0.1", 00:22:01.504 "trsvcid": "40162" 00:22:01.504 }, 00:22:01.504 "auth": { 00:22:01.504 "state": "completed", 00:22:01.504 "digest": "sha512", 00:22:01.504 "dhgroup": "ffdhe8192" 00:22:01.504 } 00:22:01.504 } 00:22:01.504 ]' 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.504 19:52:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.504 19:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.761 19:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.699 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
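Each connect_authenticate iteration traced above follows the same shape: register the host NQN with a DH-HMAC-CHAP key on the target, attach a controller from the host with the matching key, confirm the qpair reports completed authentication with the expected digest and dhgroup, then detach and remove the host. A condensed sketch of one such cycle, reusing the NQNs, address, and rpc.py path from this log; the target app is assumed to answer on rpc.py's default socket, as rpc_cmd does in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# target side: allow this host to authenticate with key0/ckey0
# (key0/ckey0 name DH-HMAC-CHAP secrets registered earlier in the test, not shown in this excerpt)
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach over TCP and authenticate with the matching key pair
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify the qpair finished authentication with the expected parameters
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect: completed
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect: ffdhe8192 here

# tear down before the next digest/dhgroup combination
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN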
00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.957 19:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.891 00:22:03.891 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.892 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.892 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.149 { 00:22:04.149 "cntlid": 145, 00:22:04.149 "qid": 0, 00:22:04.149 "state": "enabled", 00:22:04.149 "listen_address": { 00:22:04.149 "trtype": "TCP", 00:22:04.149 "adrfam": "IPv4", 00:22:04.149 "traddr": "10.0.0.2", 00:22:04.149 "trsvcid": "4420" 00:22:04.149 }, 00:22:04.149 "peer_address": { 00:22:04.149 "trtype": "TCP", 00:22:04.149 "adrfam": "IPv4", 00:22:04.149 "traddr": "10.0.0.1", 00:22:04.149 "trsvcid": "40200" 00:22:04.149 }, 00:22:04.149 "auth": { 00:22:04.149 "state": "completed", 00:22:04.149 "digest": "sha512", 00:22:04.149 "dhgroup": "ffdhe8192" 00:22:04.149 } 00:22:04.149 } 00:22:04.149 ]' 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.149 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.405 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.406 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.406 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.406 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.406 19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.664 
19:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjM4ODUzMDY4YWFkM2U4NDk4MTkyMTM2MmVlYmZkYzE1MzFlOGY2MzljMjQ5NWEzTVOrfA==: --dhchap-ctrl-secret DHHC-1:03:YzA4ODlkY2Q5ODhkYWM1MTZmZmU0MDNjMTEwYjFjZDc5YzkxMzIxMTdjYThlMTAyNWY0OWZmYjNhZTUwOTQ1ON1IcnY=: 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.598 19:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:06.535 request: 00:22:06.535 { 00:22:06.535 "name": "nvme0", 00:22:06.535 "trtype": "tcp", 00:22:06.535 "traddr": 
"10.0.0.2", 00:22:06.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.535 "adrfam": "ipv4", 00:22:06.535 "trsvcid": "4420", 00:22:06.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.535 "dhchap_key": "key2", 00:22:06.535 "method": "bdev_nvme_attach_controller", 00:22:06.535 "req_id": 1 00:22:06.535 } 00:22:06.535 Got JSON-RPC error response 00:22:06.535 response: 00:22:06.535 { 00:22:06.535 "code": -5, 00:22:06.535 "message": "Input/output error" 00:22:06.535 } 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.535 19:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.470 request: 00:22:07.470 { 00:22:07.470 "name": "nvme0", 00:22:07.470 "trtype": "tcp", 00:22:07.470 "traddr": "10.0.0.2", 00:22:07.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.470 "adrfam": "ipv4", 00:22:07.470 "trsvcid": "4420", 00:22:07.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.470 "dhchap_key": "key1", 00:22:07.470 "dhchap_ctrlr_key": "ckey2", 00:22:07.470 "method": "bdev_nvme_attach_controller", 00:22:07.470 "req_id": 1 00:22:07.470 } 00:22:07.470 Got JSON-RPC error response 00:22:07.470 response: 00:22:07.470 { 00:22:07.470 "code": -5, 00:22:07.470 "message": "Input/output error" 00:22:07.470 } 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.470 19:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.407 request: 00:22:08.407 { 00:22:08.407 "name": "nvme0", 00:22:08.407 "trtype": "tcp", 00:22:08.407 "traddr": "10.0.0.2", 00:22:08.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.407 "adrfam": "ipv4", 00:22:08.407 "trsvcid": "4420", 00:22:08.407 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.407 "dhchap_key": "key1", 00:22:08.407 "dhchap_ctrlr_key": "ckey1", 00:22:08.407 "method": "bdev_nvme_attach_controller", 00:22:08.407 "req_id": 1 00:22:08.407 } 00:22:08.407 Got JSON-RPC error response 00:22:08.407 response: 00:22:08.407 { 00:22:08.407 "code": -5, 00:22:08.407 "message": "Input/output error" 00:22:08.407 } 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3984826 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3984826 ']' 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3984826 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3984826 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3984826' 00:22:08.407 killing process with pid 3984826 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3984826 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3984826 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:08.407 19:52:17 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4007235 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4007235 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4007235 ']' 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:08.407 19:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4007235 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4007235 ']' 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
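At this point the suite restarts the target for the remaining auth cases: pid 3984826 is killed, a fresh nvmf_tgt (pid 4007235) is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and the nvmf_auth log flag, and the script blocks until /var/tmp/spdk.sock is listening. A minimal hand-run equivalent is sketched below; the binary path, namespace, flags and socket are taken from the log, while the rpc_get_methods polling loop is only an illustrative stand-in for the suite's waitforlisten helper.

# Sketch: relaunch the target with auth logging enabled and wait for its RPC socket.
# Assumes the cvl_0_0_ns_spdk namespace from the log already exists.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

# Poll the UNIX-domain RPC socket until it answers; rpc_get_methods is a cheap query
# that succeeds as soon as the app is accepting RPCs.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done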
00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:08.665 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.925 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:08.925 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:08.925 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:08.925 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.925 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.184 19:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.122 00:22:10.122 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.122 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.122 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.380 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.380 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.380 19:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.380 19:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.380 19:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.380 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.380 { 00:22:10.380 
"cntlid": 1, 00:22:10.380 "qid": 0, 00:22:10.380 "state": "enabled", 00:22:10.380 "listen_address": { 00:22:10.380 "trtype": "TCP", 00:22:10.380 "adrfam": "IPv4", 00:22:10.380 "traddr": "10.0.0.2", 00:22:10.380 "trsvcid": "4420" 00:22:10.380 }, 00:22:10.380 "peer_address": { 00:22:10.380 "trtype": "TCP", 00:22:10.380 "adrfam": "IPv4", 00:22:10.380 "traddr": "10.0.0.1", 00:22:10.380 "trsvcid": "40258" 00:22:10.380 }, 00:22:10.380 "auth": { 00:22:10.380 "state": "completed", 00:22:10.380 "digest": "sha512", 00:22:10.380 "dhgroup": "ffdhe8192" 00:22:10.380 } 00:22:10.380 } 00:22:10.381 ]' 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.381 19:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.639 19:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:N2U2YWVjZmUwMmUyZWZkMjYzMTQ3ZjhjNjQ3OGRkYmMyMjI4ZjExOTEyNDUxNmMwMDAyZDc4ZDU0NGRkNDU4Mytn58M=: 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:11.572 19:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.830 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.088 request: 00:22:12.088 { 00:22:12.088 "name": "nvme0", 00:22:12.088 "trtype": "tcp", 00:22:12.088 "traddr": "10.0.0.2", 00:22:12.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.088 "adrfam": "ipv4", 00:22:12.088 "trsvcid": "4420", 00:22:12.088 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.088 "dhchap_key": "key3", 00:22:12.088 "method": "bdev_nvme_attach_controller", 00:22:12.088 "req_id": 1 00:22:12.088 } 00:22:12.088 Got JSON-RPC error response 00:22:12.088 response: 00:22:12.088 { 00:22:12.088 "code": -5, 00:22:12.088 "message": "Input/output error" 00:22:12.088 } 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:12.088 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.347 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.605 request: 00:22:12.605 { 00:22:12.605 "name": "nvme0", 00:22:12.605 "trtype": "tcp", 00:22:12.605 "traddr": "10.0.0.2", 00:22:12.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.605 "adrfam": "ipv4", 00:22:12.605 "trsvcid": "4420", 00:22:12.605 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.605 "dhchap_key": "key3", 00:22:12.605 "method": "bdev_nvme_attach_controller", 00:22:12.605 "req_id": 1 00:22:12.605 } 00:22:12.605 Got JSON-RPC error response 00:22:12.605 response: 00:22:12.605 { 00:22:12.605 "code": -5, 00:22:12.605 "message": "Input/output error" 00:22:12.605 } 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.605 19:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.863 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:12.864 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.864 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:12.864 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.864 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.864 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.122 request: 00:22:13.122 { 00:22:13.122 "name": "nvme0", 00:22:13.122 "trtype": "tcp", 00:22:13.122 "traddr": "10.0.0.2", 00:22:13.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.122 "adrfam": "ipv4", 00:22:13.122 "trsvcid": "4420", 00:22:13.122 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.122 "dhchap_key": "key0", 00:22:13.122 "dhchap_ctrlr_key": "key1", 00:22:13.122 "method": "bdev_nvme_attach_controller", 00:22:13.122 "req_id": 1 00:22:13.122 } 00:22:13.122 Got JSON-RPC error response 00:22:13.122 response: 00:22:13.122 { 00:22:13.122 "code": -5, 00:22:13.122 "message": "Input/output error" 00:22:13.122 } 00:22:13.122 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:13.122 19:52:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.122 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.122 19:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.122 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.122 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.688 00:22:13.688 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:13.688 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:13.688 19:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.688 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.688 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.688 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3984960 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3984960 ']' 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3984960 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3984960 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3984960' 00:22:13.947 killing process with pid 3984960 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3984960 00:22:13.947 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3984960 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
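The pattern exercised throughout this block is that bdev_nvme_attach_controller only authenticates when the host side's --dhchap-key (and, if given, --dhchap-ctrlr-key) matches what nvmf_subsystem_add_host registered on the target; every mismatched pairing above fails with the JSON-RPC -5 Input/output error, while the key0-only attach at target/auth.sh@192 completes before the suite tears down. A matched configuration would look roughly like the sketch below, with the NQNs, key names and socket paths copied from the log; take it as an illustration of the pairing rule rather than a command sequence from the suite itself.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Target side (default /var/tmp/spdk.sock): allow the host and register key1/ckey1 for it.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side (/var/tmp/host.sock): attach with the same key pair, so DH-HMAC-CHAP succeeds.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1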
00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.514 rmmod nvme_tcp 00:22:14.514 rmmod nvme_fabrics 00:22:14.514 rmmod nvme_keyring 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4007235 ']' 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4007235 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 4007235 ']' 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 4007235 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007235 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007235' 00:22:14.514 killing process with pid 4007235 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 4007235 00:22:14.514 19:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 4007235 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.773 19:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.310 19:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.310 19:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MW9 /tmp/spdk.key-sha256.5VG /tmp/spdk.key-sha384.ntm /tmp/spdk.key-sha512.FQ5 /tmp/spdk.key-sha512.D8k /tmp/spdk.key-sha384.ECn /tmp/spdk.key-sha256.SwV '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:17.310 00:22:17.310 real 3m8.604s 00:22:17.310 user 7m18.747s 00:22:17.310 sys 0m24.867s 00:22:17.310 19:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:17.310 19:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.310 ************************************ 00:22:17.310 END TEST 
nvmf_auth_target 00:22:17.310 ************************************ 00:22:17.310 19:52:26 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:17.310 19:52:26 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:17.310 19:52:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:17.310 19:52:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:17.310 19:52:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:17.310 ************************************ 00:22:17.310 START TEST nvmf_bdevio_no_huge 00:22:17.310 ************************************ 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:17.310 * Looking for test storage... 00:22:17.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.310 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.311 19:52:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.687 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:18.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:18.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.688 19:52:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:18.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:18.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.688 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.949 
19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:18.949 00:22:18.949 --- 10.0.0.2 ping statistics --- 00:22:18.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.949 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:22:18.949 00:22:18.949 --- 10.0.0.1 ping statistics --- 00:22:18.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.949 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4009891 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4009891 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 4009891 ']' 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:18.949 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.949 [2024-07-25 19:52:28.316722] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:22:18.949 [2024-07-25 19:52:28.316789] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:19.210 [2024-07-25 19:52:28.384502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.210 [2024-07-25 19:52:28.470116] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.210 [2024-07-25 19:52:28.470175] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.210 [2024-07-25 19:52:28.470203] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.210 [2024-07-25 19:52:28.470215] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.210 [2024-07-25 19:52:28.470226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.210 [2024-07-25 19:52:28.470328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.210 [2024-07-25 19:52:28.470456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:19.210 [2024-07-25 19:52:28.470525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:19.210 [2024-07-25 19:52:28.470527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 [2024-07-25 19:52:28.583776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
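With the cvl_0_0/cvl_0_1 addressing and ping checks done, the bdevio run starts its own target without hugepages (--no-huge -s 1024, core mask 0x78) and immediately creates the TCP transport. Reproducing just that bring-up by hand would look roughly like the following sketch; the flags are copied from the trace, and it assumes the same namespace and default RPC socket, with the wait step borrowed from the earlier polling sketch.

# Sketch: no-hugepage target for the bdevio test, then the TCP transport, flags as logged.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

# (wait for /var/tmp/spdk.sock to come up, e.g. with the rpc_get_methods loop shown earlier)

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t tcp -o -u 8192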
00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 Malloc0 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 [2024-07-25 19:52:28.621702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:19.210 { 00:22:19.210 "params": { 00:22:19.210 "name": "Nvme$subsystem", 00:22:19.210 "trtype": "$TEST_TRANSPORT", 00:22:19.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.210 "adrfam": "ipv4", 00:22:19.210 "trsvcid": "$NVMF_PORT", 00:22:19.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.210 "hdgst": ${hdgst:-false}, 00:22:19.210 "ddgst": ${ddgst:-false} 00:22:19.210 }, 00:22:19.210 "method": "bdev_nvme_attach_controller" 00:22:19.210 } 00:22:19.210 EOF 00:22:19.210 )") 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:19.210 19:52:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:19.210 "params": { 00:22:19.210 "name": "Nvme1", 00:22:19.210 "trtype": "tcp", 00:22:19.210 "traddr": "10.0.0.2", 00:22:19.210 "adrfam": "ipv4", 00:22:19.210 "trsvcid": "4420", 00:22:19.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.210 "hdgst": false, 00:22:19.210 "ddgst": false 00:22:19.210 }, 00:22:19.210 "method": "bdev_nvme_attach_controller" 00:22:19.210 }' 00:22:19.502 [2024-07-25 19:52:28.667516] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:22:19.502 [2024-07-25 19:52:28.667598] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4010034 ] 00:22:19.502 [2024-07-25 19:52:28.728576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:19.502 [2024-07-25 19:52:28.811248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.502 [2024-07-25 19:52:28.811298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.502 [2024-07-25 19:52:28.811302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.760 I/O targets: 00:22:19.760 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:19.760 00:22:19.760 00:22:19.760 CUnit - A unit testing framework for C - Version 2.1-3 00:22:19.760 http://cunit.sourceforge.net/ 00:22:19.760 00:22:19.760 00:22:19.760 Suite: bdevio tests on: Nvme1n1 00:22:19.760 Test: blockdev write read block ...passed 00:22:19.760 Test: blockdev write zeroes read block ...passed 00:22:19.760 Test: blockdev write zeroes read no split ...passed 00:22:20.019 Test: blockdev write zeroes read split ...passed 00:22:20.019 Test: blockdev write zeroes read split partial ...passed 00:22:20.019 Test: blockdev reset ...[2024-07-25 19:52:29.290507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:20.019 [2024-07-25 19:52:29.290619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a98a00 (9): Bad file descriptor 00:22:20.019 [2024-07-25 19:52:29.346501] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
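For reference, the target this bdevio suite talks to was stood up a few lines earlier by target/bdevio.sh (lines 18-22), and bdevio itself is then launched with --json /dev/fd/62 --no-huge -s 1024 against the generated controller config shown above. A condensed sketch of that setup, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock; the NQN, sizes and the 10.0.0.2:4420 listener are copied from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                          # transport options exactly as recorded above
$rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
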
00:22:20.019 passed 00:22:20.019 Test: blockdev write read 8 blocks ...passed 00:22:20.019 Test: blockdev write read size > 128k ...passed 00:22:20.019 Test: blockdev write read invalid size ...passed 00:22:20.019 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:20.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:20.019 Test: blockdev write read max offset ...passed 00:22:20.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:20.278 Test: blockdev writev readv 8 blocks ...passed 00:22:20.278 Test: blockdev writev readv 30 x 1block ...passed 00:22:20.278 Test: blockdev writev readv block ...passed 00:22:20.278 Test: blockdev writev readv size > 128k ...passed 00:22:20.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:20.278 Test: blockdev comparev and writev ...[2024-07-25 19:52:29.604465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.278 [2024-07-25 19:52:29.604500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:20.278 [2024-07-25 19:52:29.604525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.278 [2024-07-25 19:52:29.604542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:20.278 [2024-07-25 19:52:29.604892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.278 [2024-07-25 19:52:29.604917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:20.278 [2024-07-25 19:52:29.604939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.279 [2024-07-25 19:52:29.604956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.605290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.279 [2024-07-25 19:52:29.605314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.605336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.279 [2024-07-25 19:52:29.605352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.605681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.279 [2024-07-25 19:52:29.605705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.605726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.279 [2024-07-25 19:52:29.605742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:20.279 passed 00:22:20.279 Test: blockdev nvme passthru rw ...passed 00:22:20.279 Test: blockdev nvme passthru vendor specific ...[2024-07-25 19:52:29.688378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.279 [2024-07-25 19:52:29.688406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.688559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.279 [2024-07-25 19:52:29.688581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.688739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.279 [2024-07-25 19:52:29.688762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:20.279 [2024-07-25 19:52:29.688920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.279 [2024-07-25 19:52:29.688943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:20.279 passed 00:22:20.279 Test: blockdev nvme admin passthru ...passed 00:22:20.536 Test: blockdev copy ...passed 00:22:20.536 00:22:20.536 Run Summary: Type Total Ran Passed Failed Inactive 00:22:20.536 suites 1 1 n/a 0 0 00:22:20.536 tests 23 23 23 0 0 00:22:20.536 asserts 152 152 152 0 n/a 00:22:20.536 00:22:20.536 Elapsed time = 1.314 seconds 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.795 rmmod nvme_tcp 00:22:20.795 rmmod nvme_fabrics 00:22:20.795 rmmod nvme_keyring 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4009891 ']' 00:22:20.795 19:52:30 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4009891 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 4009891 ']' 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 4009891 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009891 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009891' 00:22:20.795 killing process with pid 4009891 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 4009891 00:22:20.795 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 4009891 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.361 19:52:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.271 19:52:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.271 00:22:23.271 real 0m6.415s 00:22:23.271 user 0m11.153s 00:22:23.271 sys 0m2.410s 00:22:23.271 19:52:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:23.271 19:52:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.271 ************************************ 00:22:23.271 END TEST nvmf_bdevio_no_huge 00:22:23.271 ************************************ 00:22:23.271 19:52:32 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:23.271 19:52:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:23.271 19:52:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:23.271 19:52:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.271 ************************************ 00:22:23.271 START TEST nvmf_tls 00:22:23.271 ************************************ 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:23.271 * Looking for test storage... 
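The tail of the bdevio_no_huge run above is the usual nvmftestfini teardown: unload the host-side NVMe/TCP modules, stop the target process, and clear the test namespace and addresses. Roughly, with the namespace removal assumed to be what the _remove_spdk_ns helper (not expanded in this log) performs:

modprobe -v -r nvme-tcp                       # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"            # nvmfpid is the nvmf_tgt pid, 4009891 in this run
ip netns delete cvl_0_0_ns_spdk               # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1
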
00:22:23.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.271 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.530 19:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.433 
19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:25.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:25.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:25.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:25.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.433 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:25.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:25.434 00:22:25.434 --- 10.0.0.2 ping statistics --- 00:22:25.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.434 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:22:25.434 00:22:25.434 --- 10.0.0.1 ping statistics --- 00:22:25.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.434 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4012110 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4012110 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4012110 ']' 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:25.434 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.434 [2024-07-25 19:52:34.743716] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:22:25.434 [2024-07-25 19:52:34.743798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.434 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.434 [2024-07-25 19:52:34.815138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.692 [2024-07-25 19:52:34.904935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.692 [2024-07-25 19:52:34.904993] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
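The target for the TLS tests is started inside the namespace with --wait-for-rpc, which is what lets the test select and tune the ssl socket implementation before the framework finishes initializing. A trimmed sketch of that startup (long workspace paths shortened here; flags and values are the ones in the log):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!                                                            # 4012110 in this run
# once /var/tmp/spdk.sock is up, configure the ssl sock impl pre-init
./scripts/rpc.py sock_set_default_impl -i ssl
./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
./scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # the test expects 13 back
./scripts/rpc.py framework_start_init                                 # subsystems only come up after this
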
00:22:25.692 [2024-07-25 19:52:34.905019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.692 [2024-07-25 19:52:34.905032] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.692 [2024-07-25 19:52:34.905044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.692 [2024-07-25 19:52:34.905090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:25.692 19:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:25.949 true 00:22:25.949 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:25.949 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:26.207 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:26.207 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:26.207 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:26.464 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.464 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:26.722 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:26.722 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:26.722 19:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:26.980 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.980 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:27.239 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:27.239 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:27.239 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.239 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:27.498 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:27.498 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:27.498 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:27.757 19:52:36 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.757 19:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:28.016 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:28.017 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:28.017 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:28.274 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.274 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.8yvY1aWdUm 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RqdtqldNgF 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.8yvY1aWdUm 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RqdtqldNgF 00:22:28.532 19:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:28.789 19:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:29.046 19:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.8yvY1aWdUm 00:22:29.046 19:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8yvY1aWdUm 00:22:29.046 19:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.303 [2024-07-25 19:52:38.724848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.561 19:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.818 19:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.076 [2024-07-25 19:52:39.322485] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.076 [2024-07-25 19:52:39.322737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.076 19:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.333 malloc0 00:22:30.333 19:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.590 19:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8yvY1aWdUm 00:22:30.849 [2024-07-25 19:52:40.091218] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:30.849 19:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8yvY1aWdUm 00:22:30.849 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.819 Initializing NVMe Controllers 00:22:40.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.819 Initialization complete. Launching workers. 
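Everything TLS-specific in the setup above reduces to three things: a PSK in NVMe TLS interchange format stored in a mode-0600 file, a listener created with -k, and nvmf_subsystem_add_host pointing --psk at that file; spdk_nvme_perf then presents the same file through --psk-path over the ssl socket implementation. Condensed, using the first key and temp-file name from the log (workspace paths shortened):

key=/tmp/tmp.8yvY1aWdUm
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
chmod 0600 "$key"

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks this listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"
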
00:22:40.819 ======================================================== 00:22:40.819 Latency(us) 00:22:40.819 Device Information : IOPS MiB/s Average min max 00:22:40.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7580.76 29.61 8445.20 1420.14 9498.57 00:22:40.819 ======================================================== 00:22:40.819 Total : 7580.76 29.61 8445.20 1420.14 9498.57 00:22:40.819 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8yvY1aWdUm 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8yvY1aWdUm' 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4014000 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4014000 /var/tmp/bdevperf.sock 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4014000 ']' 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:40.819 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.079 [2024-07-25 19:52:50.264832] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:41.079 [2024-07-25 19:52:50.264920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014000 ] 00:22:41.079 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.079 [2024-07-25 19:52:50.327114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.079 [2024-07-25 19:52:50.415334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.337 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:41.337 19:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:41.337 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8yvY1aWdUm 00:22:41.597 [2024-07-25 19:52:50.790424] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.597 [2024-07-25 19:52:50.790556] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:41.597 TLSTESTn1 00:22:41.597 19:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:41.597 Running I/O for 10 seconds... 00:22:51.628 00:22:51.628 Latency(us) 00:22:51.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.628 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:51.628 Verification LBA range: start 0x0 length 0x2000 00:22:51.628 TLSTESTn1 : 10.02 3574.87 13.96 0.00 0.00 35742.14 9175.04 32428.18 00:22:51.628 =================================================================================================================== 00:22:51.628 Total : 3574.87 13.96 0.00 0.00 35742.14 9175.04 32428.18 00:22:51.628 0 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4014000 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4014000 ']' 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4014000 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4014000 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4014000' 00:22:51.887 killing process with pid 4014000 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4014000 00:22:51.887 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.887 00:22:51.887 Latency(us) 00:22:51.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:51.887 =================================================================================================================== 00:22:51.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.887 [2024-07-25 19:53:01.098241] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4014000 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RqdtqldNgF 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RqdtqldNgF 00:22:51.887 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RqdtqldNgF 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.145 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RqdtqldNgF' 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4015201 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4015201 /var/tmp/bdevperf.sock 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4015201 ']' 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.146 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.146 [2024-07-25 19:53:01.361816] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:52.146 [2024-07-25 19:53:01.361891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015201 ] 00:22:52.146 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.146 [2024-07-25 19:53:01.421195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.146 [2024-07-25 19:53:01.505545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.403 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.403 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:52.403 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RqdtqldNgF 00:22:52.662 [2024-07-25 19:53:01.892095] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.662 [2024-07-25 19:53:01.892199] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:52.662 [2024-07-25 19:53:01.897964] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.662 [2024-07-25 19:53:01.897999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956ed0 (107): Transport endpoint is not connected 00:22:52.662 [2024-07-25 19:53:01.898942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956ed0 (9): Bad file descriptor 00:22:52.662 [2024-07-25 19:53:01.899941] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.662 [2024-07-25 19:53:01.899962] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.662 [2024-07-25 19:53:01.899979] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
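This is the negative half of the check: the subsystem only has the first key registered for host1, so attaching through bdevperf with the second key must fail, which is what the JSON-RPC error below shows. The failing attach, as issued above against bdevperf's private RPC socket (paths shortened):

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# Wrong PSK for this host NQN: the controller never comes up and the RPC
# returns "Input/output error", matching the log above.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.RqdtqldNgF
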
00:22:52.662 request: 00:22:52.662 { 00:22:52.662 "name": "TLSTEST", 00:22:52.662 "trtype": "tcp", 00:22:52.662 "traddr": "10.0.0.2", 00:22:52.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.662 "adrfam": "ipv4", 00:22:52.662 "trsvcid": "4420", 00:22:52.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.662 "psk": "/tmp/tmp.RqdtqldNgF", 00:22:52.662 "method": "bdev_nvme_attach_controller", 00:22:52.662 "req_id": 1 00:22:52.662 } 00:22:52.662 Got JSON-RPC error response 00:22:52.662 response: 00:22:52.662 { 00:22:52.662 "code": -5, 00:22:52.662 "message": "Input/output error" 00:22:52.662 } 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4015201 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4015201 ']' 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4015201 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4015201 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4015201' 00:22:52.662 killing process with pid 4015201 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4015201 00:22:52.662 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.662 00:22:52.662 Latency(us) 00:22:52.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.662 =================================================================================================================== 00:22:52.662 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.662 [2024-07-25 19:53:01.948758] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.662 19:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4015201 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8yvY1aWdUm 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8yvY1aWdUm 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8yvY1aWdUm 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8yvY1aWdUm' 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4015326 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4015326 /var/tmp/bdevperf.sock 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4015326 ']' 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.920 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.920 [2024-07-25 19:53:02.211073] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:52.920 [2024-07-25 19:53:02.211161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015326 ] 00:22:52.920 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.920 [2024-07-25 19:53:02.270674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.178 [2024-07-25 19:53:02.357330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.178 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.178 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.178 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.8yvY1aWdUm 00:22:53.437 [2024-07-25 19:53:02.708742] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.437 [2024-07-25 19:53:02.708886] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.437 [2024-07-25 19:53:02.719884] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.437 [2024-07-25 19:53:02.719934] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.437 [2024-07-25 19:53:02.719976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.437 [2024-07-25 19:53:02.720899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118ced0 (107): Transport endpoint is not connected 00:22:53.437 [2024-07-25 19:53:02.721890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118ced0 (9): Bad file descriptor 00:22:53.437 [2024-07-25 19:53:02.722888] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.437 [2024-07-25 19:53:02.722909] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.437 [2024-07-25 19:53:02.722926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:53.437 request: 00:22:53.437 { 00:22:53.437 "name": "TLSTEST", 00:22:53.437 "trtype": "tcp", 00:22:53.437 "traddr": "10.0.0.2", 00:22:53.437 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.437 "adrfam": "ipv4", 00:22:53.437 "trsvcid": "4420", 00:22:53.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.437 "psk": "/tmp/tmp.8yvY1aWdUm", 00:22:53.437 "method": "bdev_nvme_attach_controller", 00:22:53.437 "req_id": 1 00:22:53.437 } 00:22:53.437 Got JSON-RPC error response 00:22:53.437 response: 00:22:53.437 { 00:22:53.437 "code": -5, 00:22:53.437 "message": "Input/output error" 00:22:53.437 } 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4015326 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4015326 ']' 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4015326 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4015326 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4015326' 00:22:53.437 killing process with pid 4015326 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4015326 00:22:53.437 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.437 00:22:53.437 Latency(us) 00:22:53.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.437 =================================================================================================================== 00:22:53.437 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.437 [2024-07-25 19:53:02.773182] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.437 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4015326 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8yvY1aWdUm 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8yvY1aWdUm 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8yvY1aWdUm 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8yvY1aWdUm' 00:22:53.695 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4015466 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4015466 /var/tmp/bdevperf.sock 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4015466 ']' 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.696 19:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.696 [2024-07-25 19:53:03.040469] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:53.696 [2024-07-25 19:53:03.040560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015466 ] 00:22:53.696 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.696 [2024-07-25 19:53:03.098669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.952 [2024-07-25 19:53:03.181607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.952 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.952 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.952 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8yvY1aWdUm 00:22:54.211 [2024-07-25 19:53:03.508221] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.211 [2024-07-25 19:53:03.508359] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.211 [2024-07-25 19:53:03.513725] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.211 [2024-07-25 19:53:03.513759] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.211 [2024-07-25 19:53:03.513811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.211 [2024-07-25 19:53:03.514281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006ed0 (107): Transport endpoint is not connected 00:22:54.211 [2024-07-25 19:53:03.515269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006ed0 (9): Bad file descriptor 00:22:54.211 [2024-07-25 19:53:03.516267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.211 [2024-07-25 19:53:03.516287] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.211 [2024-07-25 19:53:03.516305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:54.211 request: 00:22:54.211 { 00:22:54.211 "name": "TLSTEST", 00:22:54.211 "trtype": "tcp", 00:22:54.211 "traddr": "10.0.0.2", 00:22:54.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.211 "adrfam": "ipv4", 00:22:54.211 "trsvcid": "4420", 00:22:54.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.211 "psk": "/tmp/tmp.8yvY1aWdUm", 00:22:54.211 "method": "bdev_nvme_attach_controller", 00:22:54.211 "req_id": 1 00:22:54.211 } 00:22:54.211 Got JSON-RPC error response 00:22:54.211 response: 00:22:54.211 { 00:22:54.211 "code": -5, 00:22:54.211 "message": "Input/output error" 00:22:54.211 } 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4015466 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4015466 ']' 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4015466 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4015466 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4015466' 00:22:54.211 killing process with pid 4015466 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4015466 00:22:54.211 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.211 00:22:54.211 Latency(us) 00:22:54.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.211 =================================================================================================================== 00:22:54.211 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.211 [2024-07-25 19:53:03.561643] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:54.211 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4015466 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.468 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4015573 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4015573 /var/tmp/bdevperf.sock 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4015573 ']' 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.469 19:53:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.469 [2024-07-25 19:53:03.814561] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:54.469 [2024-07-25 19:53:03.814635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015573 ] 00:22:54.469 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.469 [2024-07-25 19:53:03.871909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.726 [2024-07-25 19:53:03.954417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.726 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.726 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:54.726 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:54.985 [2024-07-25 19:53:04.284021] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.985 [2024-07-25 19:53:04.285725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11565c0 (9): Bad file descriptor 00:22:54.985 [2024-07-25 19:53:04.286721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.985 [2024-07-25 19:53:04.286742] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.985 [2024-07-25 19:53:04.286759] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:54.985 request: 00:22:54.985 { 00:22:54.985 "name": "TLSTEST", 00:22:54.985 "trtype": "tcp", 00:22:54.985 "traddr": "10.0.0.2", 00:22:54.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.985 "adrfam": "ipv4", 00:22:54.985 "trsvcid": "4420", 00:22:54.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.985 "method": "bdev_nvme_attach_controller", 00:22:54.985 "req_id": 1 00:22:54.985 } 00:22:54.985 Got JSON-RPC error response 00:22:54.985 response: 00:22:54.985 { 00:22:54.985 "code": -5, 00:22:54.985 "message": "Input/output error" 00:22:54.985 } 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4015573 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4015573 ']' 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4015573 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4015573 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4015573' 00:22:54.985 killing process with pid 4015573 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4015573 00:22:54.985 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.985 00:22:54.985 Latency(us) 00:22:54.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.985 =================================================================================================================== 00:22:54.985 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.985 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4015573 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4012110 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4012110 ']' 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4012110 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4012110 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4012110' 00:22:55.244 killing process with pid 4012110 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4012110 
00:22:55.244 [2024-07-25 19:53:04.585190] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:55.244 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4012110 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.LQld28tgTU 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LQld28tgTU 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:55.502 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4015703 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4015703 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4015703 ']' 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.503 19:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.503 [2024-07-25 19:53:04.922273] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
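Editor's sketch: the NVMeTLSkey-1:02:... value captured in key_long a few steps above (target/tls.sh@159, via the inline `python -` heredoc in nvmf/common.sh) can be reconstructed roughly as follows. The visible base64 prefix confirms the 48-character configured key is encoded as its literal ASCII bytes and the hmac id is rendered as two hex digits; the trailing four bytes are assumed here to be a little-endian zlib CRC32 of the key bytes and have not been re-verified against the logged value.

#!/usr/bin/env python3
"""Sketch of the PSK interchange-format computation performed above."""
import base64
import zlib

def format_interchange_psk(configured_key: str, hmac_id: int) -> str:
    data = configured_key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")   # assumption: little-endian CRC32 suffix
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hmac_id:02x}:{b64}:"

if __name__ == "__main__":
    # Same inputs as the key_long generation in the log above.
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))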
00:22:55.503 [2024-07-25 19:53:04.922369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.762 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.762 [2024-07-25 19:53:04.991912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.762 [2024-07-25 19:53:05.080410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.762 [2024-07-25 19:53:05.080473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.762 [2024-07-25 19:53:05.080500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.762 [2024-07-25 19:53:05.080514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.762 [2024-07-25 19:53:05.080526] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.762 [2024-07-25 19:53:05.080557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LQld28tgTU 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LQld28tgTU 00:22:56.020 19:53:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.278 [2024-07-25 19:53:05.500781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.278 19:53:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:56.535 19:53:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:56.794 [2024-07-25 19:53:06.046284] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.794 [2024-07-25 19:53:06.046571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.794 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:57.052 malloc0 00:22:57.052 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:57.311 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 
00:22:57.570 [2024-07-25 19:53:06.771380] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LQld28tgTU 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LQld28tgTU' 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4015919 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4015919 /var/tmp/bdevperf.sock 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4015919 ']' 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.570 19:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.570 [2024-07-25 19:53:06.834835] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:57.570 [2024-07-25 19:53:06.834916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015919 ] 00:22:57.570 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.570 [2024-07-25 19:53:06.893315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.570 [2024-07-25 19:53:06.977263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.828 19:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:57.828 19:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:57.828 19:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 00:22:58.088 [2024-07-25 19:53:07.313518] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.088 [2024-07-25 19:53:07.313655] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:58.088 TLSTESTn1 00:22:58.088 19:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.088 Running I/O for 10 seconds... 00:23:10.305 00:23:10.305 Latency(us) 00:23:10.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.305 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.305 Verification LBA range: start 0x0 length 0x2000 00:23:10.305 TLSTESTn1 : 10.03 3128.12 12.22 0.00 0.00 40832.66 5971.06 72235.24 00:23:10.305 =================================================================================================================== 00:23:10.305 Total : 3128.12 12.22 0.00 0.00 40832.66 5971.06 72235.24 00:23:10.305 0 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4015919 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4015919 ']' 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4015919 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4015919 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4015919' 00:23:10.305 killing process with pid 4015919 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4015919 00:23:10.305 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.305 00:23:10.305 Latency(us) 00:23:10.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:10.305 =================================================================================================================== 00:23:10.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.305 [2024-07-25 19:53:17.609576] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4015919 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LQld28tgTU 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LQld28tgTU 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LQld28tgTU 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LQld28tgTU 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LQld28tgTU' 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4017232 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4017232 /var/tmp/bdevperf.sock 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4017232 ']' 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.305 19:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.305 [2024-07-25 19:53:17.884868] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:10.305 [2024-07-25 19:53:17.884958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017232 ] 00:23:10.305 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.305 [2024-07-25 19:53:17.943283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.305 [2024-07-25 19:53:18.023854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 00:23:10.305 [2024-07-25 19:53:18.398728] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.305 [2024-07-25 19:53:18.398805] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:10.305 [2024-07-25 19:53:18.398820] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LQld28tgTU 00:23:10.305 request: 00:23:10.305 { 00:23:10.305 "name": "TLSTEST", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.305 "psk": "/tmp/tmp.LQld28tgTU", 00:23:10.305 "method": "bdev_nvme_attach_controller", 00:23:10.305 "req_id": 1 00:23:10.305 } 00:23:10.305 Got JSON-RPC error response 00:23:10.305 response: 00:23:10.305 { 00:23:10.305 "code": -1, 00:23:10.305 "message": "Operation not permitted" 00:23:10.305 } 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4017232 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4017232 ']' 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4017232 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017232 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017232' 00:23:10.305 killing process with pid 4017232 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4017232 00:23:10.305 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.305 00:23:10.305 Latency(us) 00:23:10.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.305 =================================================================================================================== 00:23:10.305 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 4017232 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4015703 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4015703 ']' 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4015703 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4015703 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4015703' 00:23:10.305 killing process with pid 4015703 00:23:10.305 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4015703 00:23:10.306 [2024-07-25 19:53:18.672120] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4015703 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4017378 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4017378 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4017378 ']' 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.306 19:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.306 [2024-07-25 19:53:18.942613] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:10.306 [2024-07-25 19:53:18.942699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.306 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.306 [2024-07-25 19:53:19.006249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.306 [2024-07-25 19:53:19.089046] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.306 [2024-07-25 19:53:19.089111] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.306 [2024-07-25 19:53:19.089136] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.306 [2024-07-25 19:53:19.089148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.306 [2024-07-25 19:53:19.089159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.306 [2024-07-25 19:53:19.089193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LQld28tgTU 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LQld28tgTU 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.LQld28tgTU 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LQld28tgTU 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.306 [2024-07-25 19:53:19.439686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.306 19:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.564 [2024-07-25 19:53:19.941035] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:10.564 [2024-07-25 19:53:19.941295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.564 19:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.821 malloc0 00:23:10.821 19:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.079 19:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 00:23:11.338 [2024-07-25 19:53:20.726881] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:11.338 [2024-07-25 19:53:20.726927] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:11.338 [2024-07-25 19:53:20.726959] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:11.338 request: 00:23:11.338 { 00:23:11.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.338 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.338 "psk": "/tmp/tmp.LQld28tgTU", 00:23:11.338 "method": "nvmf_subsystem_add_host", 00:23:11.338 "req_id": 1 00:23:11.338 } 00:23:11.338 Got JSON-RPC error response 00:23:11.338 response: 00:23:11.338 { 00:23:11.338 "code": -32603, 00:23:11.338 "message": "Internal error" 00:23:11.338 } 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4017378 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4017378 ']' 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4017378 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.338 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017378 00:23:11.597 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.597 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.597 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017378' 00:23:11.597 killing process with pid 4017378 00:23:11.597 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4017378 00:23:11.597 19:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4017378 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LQld28tgTU 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=4017668 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4017668 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4017668 ']' 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.597 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.855 [2024-07-25 19:53:21.069211] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:11.855 [2024-07-25 19:53:21.069299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.855 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.855 [2024-07-25 19:53:21.137402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.855 [2024-07-25 19:53:21.224781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.855 [2024-07-25 19:53:21.224845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.855 [2024-07-25 19:53:21.224872] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.855 [2024-07-25 19:53:21.224885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.855 [2024-07-25 19:53:21.224898] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
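The permission failure and chmod retry logged above reduce to the following minimal sketch; the key value and the interchange-format layout are placeholders and assumptions, not the key used in this run. The target rejects a PSK file whose permissions are broader than owner read/write, which is why the script runs chmod 0600 before retrying nvmf_subsystem_add_host.
PSK_FILE=$(mktemp)                                   # e.g. /tmp/tmp.LQld28tgTU in this run
echo 'NVMeTLSkey-1:01:<base64-psk>:' > "$PSK_FILE"   # placeholder value, assumed interchange-format layout
chmod 0600 "$PSK_FILE"                               # otherwise tcp_load_psk logs "Incorrect permissions for PSK file"
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$PSK_FILE"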
00:23:11.855 [2024-07-25 19:53:21.224928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LQld28tgTU 00:23:12.113 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LQld28tgTU 00:23:12.114 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.372 [2024-07-25 19:53:21.641329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.372 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.630 19:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.888 [2024-07-25 19:53:22.182799] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.888 [2024-07-25 19:53:22.183032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.888 19:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.145 malloc0 00:23:13.145 19:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.403 19:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 00:23:13.662 [2024-07-25 19:53:23.072054] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4017950 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4017950 /var/tmp/bdevperf.sock 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4017950 ']' 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.922 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.922 [2024-07-25 19:53:23.135406] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:13.922 [2024-07-25 19:53:23.135490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017950 ] 00:23:13.922 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.922 [2024-07-25 19:53:23.195179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.922 [2024-07-25 19:53:23.278137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.179 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.179 19:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.179 19:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 00:23:14.436 [2024-07-25 19:53:23.658687] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.436 [2024-07-25 19:53:23.658798] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.436 TLSTESTn1 00:23:14.436 19:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:14.695 19:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:14.695 "subsystems": [ 00:23:14.695 { 00:23:14.695 "subsystem": "keyring", 00:23:14.695 "config": [] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "iobuf", 00:23:14.695 "config": [ 00:23:14.695 { 00:23:14.695 "method": "iobuf_set_options", 00:23:14.695 "params": { 00:23:14.695 "small_pool_count": 8192, 00:23:14.695 "large_pool_count": 1024, 00:23:14.695 "small_bufsize": 8192, 00:23:14.695 "large_bufsize": 135168 00:23:14.695 } 00:23:14.695 } 00:23:14.695 ] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "sock", 00:23:14.695 "config": [ 00:23:14.695 { 00:23:14.695 "method": "sock_set_default_impl", 00:23:14.695 "params": { 00:23:14.695 "impl_name": "posix" 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "sock_impl_set_options", 00:23:14.695 "params": { 00:23:14.695 "impl_name": "ssl", 00:23:14.695 "recv_buf_size": 4096, 00:23:14.695 "send_buf_size": 4096, 00:23:14.695 "enable_recv_pipe": true, 00:23:14.695 "enable_quickack": false, 00:23:14.695 "enable_placement_id": 0, 00:23:14.695 "enable_zerocopy_send_server": true, 00:23:14.695 "enable_zerocopy_send_client": false, 00:23:14.695 "zerocopy_threshold": 0, 00:23:14.695 "tls_version": 0, 00:23:14.695 "enable_ktls": false 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "sock_impl_set_options", 00:23:14.695 "params": { 00:23:14.695 "impl_name": "posix", 00:23:14.695 "recv_buf_size": 2097152, 00:23:14.695 "send_buf_size": 
2097152, 00:23:14.695 "enable_recv_pipe": true, 00:23:14.695 "enable_quickack": false, 00:23:14.695 "enable_placement_id": 0, 00:23:14.695 "enable_zerocopy_send_server": true, 00:23:14.695 "enable_zerocopy_send_client": false, 00:23:14.695 "zerocopy_threshold": 0, 00:23:14.695 "tls_version": 0, 00:23:14.695 "enable_ktls": false 00:23:14.695 } 00:23:14.695 } 00:23:14.695 ] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "vmd", 00:23:14.695 "config": [] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "accel", 00:23:14.695 "config": [ 00:23:14.695 { 00:23:14.695 "method": "accel_set_options", 00:23:14.695 "params": { 00:23:14.695 "small_cache_size": 128, 00:23:14.695 "large_cache_size": 16, 00:23:14.695 "task_count": 2048, 00:23:14.695 "sequence_count": 2048, 00:23:14.695 "buf_count": 2048 00:23:14.695 } 00:23:14.695 } 00:23:14.695 ] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "bdev", 00:23:14.695 "config": [ 00:23:14.695 { 00:23:14.695 "method": "bdev_set_options", 00:23:14.695 "params": { 00:23:14.695 "bdev_io_pool_size": 65535, 00:23:14.695 "bdev_io_cache_size": 256, 00:23:14.695 "bdev_auto_examine": true, 00:23:14.695 "iobuf_small_cache_size": 128, 00:23:14.695 "iobuf_large_cache_size": 16 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "bdev_raid_set_options", 00:23:14.695 "params": { 00:23:14.695 "process_window_size_kb": 1024 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "bdev_iscsi_set_options", 00:23:14.695 "params": { 00:23:14.695 "timeout_sec": 30 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "bdev_nvme_set_options", 00:23:14.695 "params": { 00:23:14.695 "action_on_timeout": "none", 00:23:14.695 "timeout_us": 0, 00:23:14.695 "timeout_admin_us": 0, 00:23:14.695 "keep_alive_timeout_ms": 10000, 00:23:14.695 "arbitration_burst": 0, 00:23:14.695 "low_priority_weight": 0, 00:23:14.695 "medium_priority_weight": 0, 00:23:14.695 "high_priority_weight": 0, 00:23:14.695 "nvme_adminq_poll_period_us": 10000, 00:23:14.695 "nvme_ioq_poll_period_us": 0, 00:23:14.695 "io_queue_requests": 0, 00:23:14.695 "delay_cmd_submit": true, 00:23:14.695 "transport_retry_count": 4, 00:23:14.695 "bdev_retry_count": 3, 00:23:14.695 "transport_ack_timeout": 0, 00:23:14.695 "ctrlr_loss_timeout_sec": 0, 00:23:14.695 "reconnect_delay_sec": 0, 00:23:14.695 "fast_io_fail_timeout_sec": 0, 00:23:14.695 "disable_auto_failback": false, 00:23:14.695 "generate_uuids": false, 00:23:14.695 "transport_tos": 0, 00:23:14.695 "nvme_error_stat": false, 00:23:14.695 "rdma_srq_size": 0, 00:23:14.695 "io_path_stat": false, 00:23:14.695 "allow_accel_sequence": false, 00:23:14.695 "rdma_max_cq_size": 0, 00:23:14.695 "rdma_cm_event_timeout_ms": 0, 00:23:14.695 "dhchap_digests": [ 00:23:14.695 "sha256", 00:23:14.695 "sha384", 00:23:14.695 "sha512" 00:23:14.695 ], 00:23:14.695 "dhchap_dhgroups": [ 00:23:14.695 "null", 00:23:14.695 "ffdhe2048", 00:23:14.695 "ffdhe3072", 00:23:14.695 "ffdhe4096", 00:23:14.695 "ffdhe6144", 00:23:14.695 "ffdhe8192" 00:23:14.695 ] 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "bdev_nvme_set_hotplug", 00:23:14.695 "params": { 00:23:14.695 "period_us": 100000, 00:23:14.695 "enable": false 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "bdev_malloc_create", 00:23:14.695 "params": { 00:23:14.695 "name": "malloc0", 00:23:14.695 "num_blocks": 8192, 00:23:14.695 "block_size": 4096, 00:23:14.695 "physical_block_size": 4096, 00:23:14.695 "uuid": 
"16204b05-c9a8-40f3-b6a0-eb0451e830f1", 00:23:14.695 "optimal_io_boundary": 0 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "bdev_wait_for_examine" 00:23:14.695 } 00:23:14.695 ] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "nbd", 00:23:14.695 "config": [] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "scheduler", 00:23:14.695 "config": [ 00:23:14.695 { 00:23:14.695 "method": "framework_set_scheduler", 00:23:14.695 "params": { 00:23:14.695 "name": "static" 00:23:14.695 } 00:23:14.695 } 00:23:14.695 ] 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "subsystem": "nvmf", 00:23:14.695 "config": [ 00:23:14.695 { 00:23:14.695 "method": "nvmf_set_config", 00:23:14.695 "params": { 00:23:14.695 "discovery_filter": "match_any", 00:23:14.695 "admin_cmd_passthru": { 00:23:14.695 "identify_ctrlr": false 00:23:14.695 } 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "nvmf_set_max_subsystems", 00:23:14.695 "params": { 00:23:14.695 "max_subsystems": 1024 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "nvmf_set_crdt", 00:23:14.695 "params": { 00:23:14.695 "crdt1": 0, 00:23:14.695 "crdt2": 0, 00:23:14.695 "crdt3": 0 00:23:14.695 } 00:23:14.695 }, 00:23:14.695 { 00:23:14.695 "method": "nvmf_create_transport", 00:23:14.695 "params": { 00:23:14.695 "trtype": "TCP", 00:23:14.695 "max_queue_depth": 128, 00:23:14.695 "max_io_qpairs_per_ctrlr": 127, 00:23:14.695 "in_capsule_data_size": 4096, 00:23:14.695 "max_io_size": 131072, 00:23:14.695 "io_unit_size": 131072, 00:23:14.695 "max_aq_depth": 128, 00:23:14.695 "num_shared_buffers": 511, 00:23:14.695 "buf_cache_size": 4294967295, 00:23:14.695 "dif_insert_or_strip": false, 00:23:14.695 "zcopy": false, 00:23:14.695 "c2h_success": false, 00:23:14.695 "sock_priority": 0, 00:23:14.695 "abort_timeout_sec": 1, 00:23:14.696 "ack_timeout": 0, 00:23:14.696 "data_wr_pool_size": 0 00:23:14.696 } 00:23:14.696 }, 00:23:14.696 { 00:23:14.696 "method": "nvmf_create_subsystem", 00:23:14.696 "params": { 00:23:14.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.696 "allow_any_host": false, 00:23:14.696 "serial_number": "SPDK00000000000001", 00:23:14.696 "model_number": "SPDK bdev Controller", 00:23:14.696 "max_namespaces": 10, 00:23:14.696 "min_cntlid": 1, 00:23:14.696 "max_cntlid": 65519, 00:23:14.696 "ana_reporting": false 00:23:14.696 } 00:23:14.696 }, 00:23:14.696 { 00:23:14.696 "method": "nvmf_subsystem_add_host", 00:23:14.696 "params": { 00:23:14.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.696 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.696 "psk": "/tmp/tmp.LQld28tgTU" 00:23:14.696 } 00:23:14.696 }, 00:23:14.696 { 00:23:14.696 "method": "nvmf_subsystem_add_ns", 00:23:14.696 "params": { 00:23:14.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.696 "namespace": { 00:23:14.696 "nsid": 1, 00:23:14.696 "bdev_name": "malloc0", 00:23:14.696 "nguid": "16204B05C9A840F3B6A0EB0451E830F1", 00:23:14.696 "uuid": "16204b05-c9a8-40f3-b6a0-eb0451e830f1", 00:23:14.696 "no_auto_visible": false 00:23:14.696 } 00:23:14.696 } 00:23:14.696 }, 00:23:14.696 { 00:23:14.696 "method": "nvmf_subsystem_add_listener", 00:23:14.696 "params": { 00:23:14.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.696 "listen_address": { 00:23:14.696 "trtype": "TCP", 00:23:14.696 "adrfam": "IPv4", 00:23:14.696 "traddr": "10.0.0.2", 00:23:14.696 "trsvcid": "4420" 00:23:14.696 }, 00:23:14.696 "secure_channel": true 00:23:14.696 } 00:23:14.696 } 00:23:14.696 ] 00:23:14.696 } 00:23:14.696 ] 00:23:14.696 }' 00:23:14.696 19:53:24 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.264 19:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:15.264 "subsystems": [ 00:23:15.264 { 00:23:15.264 "subsystem": "keyring", 00:23:15.264 "config": [] 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "subsystem": "iobuf", 00:23:15.264 "config": [ 00:23:15.264 { 00:23:15.264 "method": "iobuf_set_options", 00:23:15.264 "params": { 00:23:15.264 "small_pool_count": 8192, 00:23:15.264 "large_pool_count": 1024, 00:23:15.264 "small_bufsize": 8192, 00:23:15.264 "large_bufsize": 135168 00:23:15.264 } 00:23:15.264 } 00:23:15.264 ] 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "subsystem": "sock", 00:23:15.264 "config": [ 00:23:15.264 { 00:23:15.264 "method": "sock_set_default_impl", 00:23:15.264 "params": { 00:23:15.264 "impl_name": "posix" 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "sock_impl_set_options", 00:23:15.264 "params": { 00:23:15.264 "impl_name": "ssl", 00:23:15.264 "recv_buf_size": 4096, 00:23:15.264 "send_buf_size": 4096, 00:23:15.264 "enable_recv_pipe": true, 00:23:15.264 "enable_quickack": false, 00:23:15.264 "enable_placement_id": 0, 00:23:15.264 "enable_zerocopy_send_server": true, 00:23:15.264 "enable_zerocopy_send_client": false, 00:23:15.264 "zerocopy_threshold": 0, 00:23:15.264 "tls_version": 0, 00:23:15.264 "enable_ktls": false 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "sock_impl_set_options", 00:23:15.264 "params": { 00:23:15.264 "impl_name": "posix", 00:23:15.264 "recv_buf_size": 2097152, 00:23:15.264 "send_buf_size": 2097152, 00:23:15.264 "enable_recv_pipe": true, 00:23:15.264 "enable_quickack": false, 00:23:15.264 "enable_placement_id": 0, 00:23:15.264 "enable_zerocopy_send_server": true, 00:23:15.264 "enable_zerocopy_send_client": false, 00:23:15.264 "zerocopy_threshold": 0, 00:23:15.264 "tls_version": 0, 00:23:15.264 "enable_ktls": false 00:23:15.264 } 00:23:15.264 } 00:23:15.264 ] 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "subsystem": "vmd", 00:23:15.264 "config": [] 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "subsystem": "accel", 00:23:15.264 "config": [ 00:23:15.264 { 00:23:15.264 "method": "accel_set_options", 00:23:15.264 "params": { 00:23:15.264 "small_cache_size": 128, 00:23:15.264 "large_cache_size": 16, 00:23:15.264 "task_count": 2048, 00:23:15.264 "sequence_count": 2048, 00:23:15.264 "buf_count": 2048 00:23:15.264 } 00:23:15.264 } 00:23:15.264 ] 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "subsystem": "bdev", 00:23:15.264 "config": [ 00:23:15.264 { 00:23:15.264 "method": "bdev_set_options", 00:23:15.264 "params": { 00:23:15.264 "bdev_io_pool_size": 65535, 00:23:15.264 "bdev_io_cache_size": 256, 00:23:15.264 "bdev_auto_examine": true, 00:23:15.264 "iobuf_small_cache_size": 128, 00:23:15.264 "iobuf_large_cache_size": 16 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "bdev_raid_set_options", 00:23:15.264 "params": { 00:23:15.264 "process_window_size_kb": 1024 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "bdev_iscsi_set_options", 00:23:15.264 "params": { 00:23:15.264 "timeout_sec": 30 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "bdev_nvme_set_options", 00:23:15.264 "params": { 00:23:15.264 "action_on_timeout": "none", 00:23:15.264 "timeout_us": 0, 00:23:15.264 "timeout_admin_us": 0, 00:23:15.264 "keep_alive_timeout_ms": 10000, 00:23:15.264 "arbitration_burst": 0, 
00:23:15.264 "low_priority_weight": 0, 00:23:15.264 "medium_priority_weight": 0, 00:23:15.264 "high_priority_weight": 0, 00:23:15.264 "nvme_adminq_poll_period_us": 10000, 00:23:15.264 "nvme_ioq_poll_period_us": 0, 00:23:15.264 "io_queue_requests": 512, 00:23:15.264 "delay_cmd_submit": true, 00:23:15.264 "transport_retry_count": 4, 00:23:15.264 "bdev_retry_count": 3, 00:23:15.264 "transport_ack_timeout": 0, 00:23:15.264 "ctrlr_loss_timeout_sec": 0, 00:23:15.264 "reconnect_delay_sec": 0, 00:23:15.264 "fast_io_fail_timeout_sec": 0, 00:23:15.264 "disable_auto_failback": false, 00:23:15.264 "generate_uuids": false, 00:23:15.264 "transport_tos": 0, 00:23:15.264 "nvme_error_stat": false, 00:23:15.264 "rdma_srq_size": 0, 00:23:15.264 "io_path_stat": false, 00:23:15.264 "allow_accel_sequence": false, 00:23:15.264 "rdma_max_cq_size": 0, 00:23:15.264 "rdma_cm_event_timeout_ms": 0, 00:23:15.264 "dhchap_digests": [ 00:23:15.264 "sha256", 00:23:15.264 "sha384", 00:23:15.264 "sha512" 00:23:15.264 ], 00:23:15.264 "dhchap_dhgroups": [ 00:23:15.264 "null", 00:23:15.264 "ffdhe2048", 00:23:15.264 "ffdhe3072", 00:23:15.264 "ffdhe4096", 00:23:15.264 "ffdhe6144", 00:23:15.264 "ffdhe8192" 00:23:15.264 ] 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "bdev_nvme_attach_controller", 00:23:15.264 "params": { 00:23:15.264 "name": "TLSTEST", 00:23:15.264 "trtype": "TCP", 00:23:15.264 "adrfam": "IPv4", 00:23:15.264 "traddr": "10.0.0.2", 00:23:15.264 "trsvcid": "4420", 00:23:15.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.264 "prchk_reftag": false, 00:23:15.264 "prchk_guard": false, 00:23:15.264 "ctrlr_loss_timeout_sec": 0, 00:23:15.264 "reconnect_delay_sec": 0, 00:23:15.264 "fast_io_fail_timeout_sec": 0, 00:23:15.264 "psk": "/tmp/tmp.LQld28tgTU", 00:23:15.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.264 "hdgst": false, 00:23:15.264 "ddgst": false 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "bdev_nvme_set_hotplug", 00:23:15.264 "params": { 00:23:15.264 "period_us": 100000, 00:23:15.264 "enable": false 00:23:15.264 } 00:23:15.264 }, 00:23:15.264 { 00:23:15.264 "method": "bdev_wait_for_examine" 00:23:15.265 } 00:23:15.265 ] 00:23:15.265 }, 00:23:15.265 { 00:23:15.265 "subsystem": "nbd", 00:23:15.265 "config": [] 00:23:15.265 } 00:23:15.265 ] 00:23:15.265 }' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4017950 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4017950 ']' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4017950 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017950 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017950' 00:23:15.265 killing process with pid 4017950 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4017950 00:23:15.265 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.265 00:23:15.265 Latency(us) 00:23:15.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:15.265 =================================================================================================================== 00:23:15.265 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.265 [2024-07-25 19:53:24.417760] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4017950 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4017668 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4017668 ']' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4017668 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017668 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017668' 00:23:15.265 killing process with pid 4017668 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4017668 00:23:15.265 [2024-07-25 19:53:24.654287] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:15.265 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4017668 00:23:15.555 19:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:15.555 19:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.555 19:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:15.555 "subsystems": [ 00:23:15.555 { 00:23:15.555 "subsystem": "keyring", 00:23:15.555 "config": [] 00:23:15.555 }, 00:23:15.555 { 00:23:15.555 "subsystem": "iobuf", 00:23:15.555 "config": [ 00:23:15.555 { 00:23:15.555 "method": "iobuf_set_options", 00:23:15.556 "params": { 00:23:15.556 "small_pool_count": 8192, 00:23:15.556 "large_pool_count": 1024, 00:23:15.556 "small_bufsize": 8192, 00:23:15.556 "large_bufsize": 135168 00:23:15.556 } 00:23:15.556 } 00:23:15.556 ] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "sock", 00:23:15.556 "config": [ 00:23:15.556 { 00:23:15.556 "method": "sock_set_default_impl", 00:23:15.556 "params": { 00:23:15.556 "impl_name": "posix" 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "sock_impl_set_options", 00:23:15.556 "params": { 00:23:15.556 "impl_name": "ssl", 00:23:15.556 "recv_buf_size": 4096, 00:23:15.556 "send_buf_size": 4096, 00:23:15.556 "enable_recv_pipe": true, 00:23:15.556 "enable_quickack": false, 00:23:15.556 "enable_placement_id": 0, 00:23:15.556 "enable_zerocopy_send_server": true, 00:23:15.556 "enable_zerocopy_send_client": false, 00:23:15.556 "zerocopy_threshold": 0, 00:23:15.556 "tls_version": 0, 00:23:15.556 "enable_ktls": false 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "sock_impl_set_options", 00:23:15.556 "params": { 00:23:15.556 "impl_name": "posix", 00:23:15.556 "recv_buf_size": 2097152, 00:23:15.556 "send_buf_size": 2097152, 00:23:15.556 "enable_recv_pipe": true, 
00:23:15.556 "enable_quickack": false, 00:23:15.556 "enable_placement_id": 0, 00:23:15.556 "enable_zerocopy_send_server": true, 00:23:15.556 "enable_zerocopy_send_client": false, 00:23:15.556 "zerocopy_threshold": 0, 00:23:15.556 "tls_version": 0, 00:23:15.556 "enable_ktls": false 00:23:15.556 } 00:23:15.556 } 00:23:15.556 ] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "vmd", 00:23:15.556 "config": [] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "accel", 00:23:15.556 "config": [ 00:23:15.556 { 00:23:15.556 "method": "accel_set_options", 00:23:15.556 "params": { 00:23:15.556 "small_cache_size": 128, 00:23:15.556 "large_cache_size": 16, 00:23:15.556 "task_count": 2048, 00:23:15.556 "sequence_count": 2048, 00:23:15.556 "buf_count": 2048 00:23:15.556 } 00:23:15.556 } 00:23:15.556 ] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "bdev", 00:23:15.556 "config": [ 00:23:15.556 { 00:23:15.556 "method": "bdev_set_options", 00:23:15.556 "params": { 00:23:15.556 "bdev_io_pool_size": 65535, 00:23:15.556 "bdev_io_cache_size": 256, 00:23:15.556 "bdev_auto_examine": true, 00:23:15.556 "iobuf_small_cache_size": 128, 00:23:15.556 "iobuf_large_cache_size": 16 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "bdev_raid_set_options", 00:23:15.556 "params": { 00:23:15.556 "process_window_size_kb": 1024 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "bdev_iscsi_set_options", 00:23:15.556 "params": { 00:23:15.556 "timeout_sec": 30 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "bdev_nvme_set_options", 00:23:15.556 "params": { 00:23:15.556 "action_on_timeout": "none", 00:23:15.556 "timeout_us": 0, 00:23:15.556 "timeout_admin_us": 0, 00:23:15.556 "keep_alive_timeout_ms": 10000, 00:23:15.556 "arbitration_burst": 0, 00:23:15.556 "low_priority_weight": 0, 00:23:15.556 "medium_priority_weight": 0, 00:23:15.556 "high_priority_weight": 0, 00:23:15.556 "nvme_adminq_poll_period_us": 10000, 00:23:15.556 "nvme_ioq_poll_period_us": 0, 00:23:15.556 "io_queue_requests": 0, 00:23:15.556 "delay_cmd_submit": true, 00:23:15.556 "transport_retry_count": 4, 00:23:15.556 "bdev_retry_count": 3, 00:23:15.556 "transport_ack_timeout": 0, 00:23:15.556 "ctrlr_loss_timeout_sec": 0, 00:23:15.556 "reconnect_delay_sec": 0, 00:23:15.556 "fast_io_fail_timeout_sec": 0, 00:23:15.556 "disable_auto_failback": false, 00:23:15.556 "generate_uuids": false, 00:23:15.556 "transport_tos": 0, 00:23:15.556 "nvme_error_stat": false, 00:23:15.556 "rdma_srq_size": 0, 00:23:15.556 "io_path_stat": false, 00:23:15.556 "allow_accel_sequence": false, 00:23:15.556 "rdma_max_cq_size": 0, 00:23:15.556 "rdma_cm_event_timeout_ms": 0, 00:23:15.556 "dhchap_digests": [ 00:23:15.556 "sha256", 00:23:15.556 "sha384", 00:23:15.556 "sha512" 00:23:15.556 ], 00:23:15.556 "dhchap_dhgroups": [ 00:23:15.556 "null", 00:23:15.556 "ffdhe2048", 00:23:15.556 "ffdhe3072", 00:23:15.556 "ffdhe4096", 00:23:15.556 "ffdhe6144", 00:23:15.556 "ffdhe8192" 00:23:15.556 ] 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "bdev_nvme_set_hotplug", 00:23:15.556 "params": { 00:23:15.556 "period_us": 100000, 00:23:15.556 "enable": false 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "bdev_malloc_create", 00:23:15.556 "params": { 00:23:15.556 "name": "malloc0", 00:23:15.556 "num_blocks": 8192, 00:23:15.556 "block_size": 4096, 00:23:15.556 "physical_block_size": 4096, 00:23:15.556 "uuid": "16204b05-c9a8-40f3-b6a0-eb0451e830f1", 00:23:15.556 "optimal_io_boundary": 0 
00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "bdev_wait_for_examine" 00:23:15.556 } 00:23:15.556 ] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "nbd", 00:23:15.556 "config": [] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "scheduler", 00:23:15.556 "config": [ 00:23:15.556 { 00:23:15.556 "method": "framework_set_scheduler", 00:23:15.556 "params": { 00:23:15.556 "name": "static" 00:23:15.556 } 00:23:15.556 } 00:23:15.556 ] 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "subsystem": "nvmf", 00:23:15.556 "config": [ 00:23:15.556 { 00:23:15.556 "method": "nvmf_set_config", 00:23:15.556 "params": { 00:23:15.556 "discovery_filter": "match_any", 00:23:15.556 "admin_cmd_passthru": { 00:23:15.556 "identify_ctrlr": false 00:23:15.556 } 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "nvmf_set_max_subsystems", 00:23:15.556 "params": { 00:23:15.556 "max_subsystems": 1024 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "nvmf_set_crdt", 00:23:15.556 "params": { 00:23:15.556 "crdt1": 0, 00:23:15.556 "crdt2": 0, 00:23:15.556 "crdt3": 0 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "nvmf_create_transport", 00:23:15.556 "params": { 00:23:15.556 "trtype": "TCP", 00:23:15.556 "max_queue_depth": 128, 00:23:15.556 "max_io_qpairs_per_ctrlr": 127, 00:23:15.556 "in_capsule_data_size": 4096, 00:23:15.556 "max_io_size": 131072, 00:23:15.556 "io_unit_size": 131072, 00:23:15.556 "max_aq_depth": 128, 00:23:15.556 "num_shared_buffers": 511, 00:23:15.556 "buf_cache_size": 4294967295, 00:23:15.556 "dif_insert_or_strip": false, 00:23:15.556 "zcopy": false, 00:23:15.556 "c2h_success": false, 00:23:15.556 "sock_priority": 0, 00:23:15.556 "abort_timeout_sec": 1, 00:23:15.556 "ack_timeout": 0, 00:23:15.556 "data_wr_pool_size": 0 00:23:15.556 } 00:23:15.556 }, 00:23:15.556 { 00:23:15.556 "method": "nvmf_create_subsystem", 00:23:15.556 "params": { 00:23:15.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.556 "allow_any_host": false, 00:23:15.556 "serial_number": "SPDK00000000000001", 00:23:15.556 "model_number": "SPDK bdev Controller", 00:23:15.556 "max_namespaces": 10, 00:23:15.557 "min_cntlid": 1, 00:23:15.557 "max_cntlid": 65519, 00:23:15.557 "ana_reporting": false 00:23:15.557 } 00:23:15.557 }, 00:23:15.557 { 00:23:15.557 "method": "nvmf_subsystem_add_host", 00:23:15.557 "params": { 00:23:15.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.557 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.557 "psk": "/tmp/tmp.LQld28tgTU" 00:23:15.557 } 00:23:15.557 }, 00:23:15.557 { 00:23:15.557 "method": "nvmf_subsystem_add_ns", 00:23:15.557 "params": { 00:23:15.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.557 "namespace": { 00:23:15.557 "nsid": 1, 00:23:15.557 "bdev_name": "malloc0", 00:23:15.557 "nguid": "16204B05C9A840F3B6A0EB0451E830F1", 00:23:15.557 "uuid": "16204b05-c9a8-40f3-b6a0-eb0451e830f1", 00:23:15.557 "no_auto_visible": false 00:23:15.557 } 00:23:15.557 } 00:23:15.557 }, 00:23:15.557 { 00:23:15.557 "method": "nvmf_subsystem_add_listener", 00:23:15.557 "params": { 00:23:15.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.557 "listen_address": { 00:23:15.557 "trtype": "TCP", 00:23:15.557 "adrfam": "IPv4", 00:23:15.557 "traddr": "10.0.0.2", 00:23:15.557 "trsvcid": "4420" 00:23:15.557 }, 00:23:15.557 "secure_channel": true 00:23:15.557 } 00:23:15.557 } 00:23:15.557 ] 00:23:15.557 } 00:23:15.557 ] 00:23:15.557 }' 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:15.557 
19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4018111 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4018111 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4018111 ']' 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.557 19:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.557 [2024-07-25 19:53:24.961435] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:15.557 [2024-07-25 19:53:24.961536] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.817 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.817 [2024-07-25 19:53:25.031785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.817 [2024-07-25 19:53:25.117625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.817 [2024-07-25 19:53:25.117689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.817 [2024-07-25 19:53:25.117715] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.817 [2024-07-25 19:53:25.117730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.817 [2024-07-25 19:53:25.117743] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
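The tgtconf JSON dumped by save_config above is not only printed for the record: the next target instance is started from it by echoing it back through a /dev/fd descriptor, so the TLS listener, namespace and PSK host entry come back exactly as saved. A condensed sketch of that pattern, with the long workspace paths shortened:
# capture the running target's configuration over its RPC socket
tgtconf=$(scripts/rpc.py save_config)
# replay it into a fresh target via process substitution (the /dev/fd/62 seen in the log)
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")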
00:23:15.817 [2024-07-25 19:53:25.117830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.077 [2024-07-25 19:53:25.356125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.077 [2024-07-25 19:53:25.372073] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:16.077 [2024-07-25 19:53:25.388131] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.077 [2024-07-25 19:53:25.397270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4018265 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4018265 /var/tmp/bdevperf.sock 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4018265 ']' 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:16.643 19:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:16.643 "subsystems": [ 00:23:16.643 { 00:23:16.643 "subsystem": "keyring", 00:23:16.643 "config": [] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "iobuf", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "iobuf_set_options", 00:23:16.643 "params": { 00:23:16.643 "small_pool_count": 8192, 00:23:16.643 "large_pool_count": 1024, 00:23:16.643 "small_bufsize": 8192, 00:23:16.643 "large_bufsize": 135168 00:23:16.643 } 00:23:16.643 } 00:23:16.643 ] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "sock", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "sock_set_default_impl", 00:23:16.643 "params": { 00:23:16.643 "impl_name": "posix" 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "sock_impl_set_options", 00:23:16.643 "params": { 00:23:16.643 "impl_name": "ssl", 00:23:16.643 "recv_buf_size": 4096, 00:23:16.643 "send_buf_size": 4096, 00:23:16.643 "enable_recv_pipe": true, 00:23:16.643 "enable_quickack": false, 00:23:16.643 "enable_placement_id": 0, 00:23:16.643 "enable_zerocopy_send_server": true, 00:23:16.643 "enable_zerocopy_send_client": false, 00:23:16.643 "zerocopy_threshold": 0, 00:23:16.643 "tls_version": 0, 00:23:16.643 "enable_ktls": false 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "sock_impl_set_options", 00:23:16.643 "params": { 00:23:16.643 "impl_name": "posix", 00:23:16.643 "recv_buf_size": 2097152, 00:23:16.643 "send_buf_size": 2097152, 00:23:16.643 "enable_recv_pipe": true, 00:23:16.643 "enable_quickack": false, 00:23:16.643 "enable_placement_id": 0, 00:23:16.643 "enable_zerocopy_send_server": true, 00:23:16.643 "enable_zerocopy_send_client": false, 00:23:16.643 "zerocopy_threshold": 0, 00:23:16.643 "tls_version": 0, 00:23:16.643 "enable_ktls": false 00:23:16.643 } 00:23:16.643 } 00:23:16.643 ] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "vmd", 00:23:16.643 "config": [] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "accel", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "accel_set_options", 00:23:16.643 "params": { 00:23:16.643 "small_cache_size": 128, 00:23:16.643 "large_cache_size": 16, 00:23:16.643 "task_count": 2048, 00:23:16.643 "sequence_count": 2048, 00:23:16.643 "buf_count": 2048 00:23:16.643 } 00:23:16.643 } 00:23:16.643 ] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "bdev", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "bdev_set_options", 00:23:16.643 "params": { 00:23:16.643 "bdev_io_pool_size": 65535, 00:23:16.643 "bdev_io_cache_size": 256, 00:23:16.643 "bdev_auto_examine": true, 00:23:16.643 "iobuf_small_cache_size": 128, 00:23:16.643 "iobuf_large_cache_size": 16 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_raid_set_options", 00:23:16.643 "params": { 00:23:16.643 "process_window_size_kb": 1024 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_iscsi_set_options", 00:23:16.643 "params": { 00:23:16.643 "timeout_sec": 30 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_nvme_set_options", 00:23:16.643 "params": { 00:23:16.643 "action_on_timeout": "none", 00:23:16.643 "timeout_us": 0, 00:23:16.643 "timeout_admin_us": 0, 00:23:16.643 "keep_alive_timeout_ms": 10000, 00:23:16.643 "arbitration_burst": 0, 00:23:16.643 "low_priority_weight": 0, 00:23:16.643 "medium_priority_weight": 0, 00:23:16.643 "high_priority_weight": 0, 00:23:16.643 
"nvme_adminq_poll_period_us": 10000, 00:23:16.643 "nvme_ioq_poll_period_us": 0, 00:23:16.643 "io_queue_requests": 512, 00:23:16.643 "delay_cmd_submit": true, 00:23:16.643 "transport_retry_count": 4, 00:23:16.643 "bdev_retry_count": 3, 00:23:16.643 "transport_ack_timeout": 0, 00:23:16.643 "ctrlr_loss_timeout_sec": 0, 00:23:16.643 "reconnect_delay_sec": 0, 00:23:16.643 "fast_io_fail_timeout_sec": 0, 00:23:16.643 "disable_auto_failback": false, 00:23:16.643 "generate_uuids": false, 00:23:16.643 "transport_tos": 0, 00:23:16.643 "nvme_error_stat": false, 00:23:16.643 "rdma_srq_size": 0, 00:23:16.643 "io_path_stat": false, 00:23:16.643 "allow_accel_sequence": false, 00:23:16.643 "rdma_max_cq_size": 0, 00:23:16.643 "rdma_cm_event_timeout_ms": 0, 00:23:16.643 "dhchap_digests": [ 00:23:16.643 "sha256", 00:23:16.643 "sha384", 00:23:16.643 "sha512" 00:23:16.644 ], 00:23:16.644 "dhchap_dhgroups": [ 00:23:16.644 "null", 00:23:16.644 "ffdhe2048", 00:23:16.644 "ffdhe3072", 00:23:16.644 "ffdhe4096", 00:23:16.644 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.644 he6144", 00:23:16.644 "ffdhe8192" 00:23:16.644 ] 00:23:16.644 } 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "method": "bdev_nvme_attach_controller", 00:23:16.644 "params": { 00:23:16.644 "name": "TLSTEST", 00:23:16.644 "trtype": "TCP", 00:23:16.644 "adrfam": "IPv4", 00:23:16.644 "traddr": "10.0.0.2", 00:23:16.644 "trsvcid": "4420", 00:23:16.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.644 "prchk_reftag": false, 00:23:16.644 "prchk_guard": false, 00:23:16.644 "ctrlr_loss_timeout_sec": 0, 00:23:16.644 "reconnect_delay_sec": 0, 00:23:16.644 "fast_io_fail_timeout_sec": 0, 00:23:16.644 "psk": "/tmp/tmp.LQld28tgTU", 00:23:16.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.644 "hdgst": false, 00:23:16.644 "ddgst": false 00:23:16.644 } 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "method": "bdev_nvme_set_hotplug", 00:23:16.644 "params": { 00:23:16.644 "period_us": 100000, 00:23:16.644 "enable": false 00:23:16.644 } 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "method": "bdev_wait_for_examine" 00:23:16.644 } 00:23:16.644 ] 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "subsystem": "nbd", 00:23:16.644 "config": [] 00:23:16.644 } 00:23:16.644 ] 00:23:16.644 }' 00:23:16.644 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.644 19:53:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.644 [2024-07-25 19:53:26.007363] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:16.644 [2024-07-25 19:53:26.007463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018265 ] 00:23:16.644 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.644 [2024-07-25 19:53:26.066051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.901 [2024-07-25 19:53:26.152736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.901 [2024-07-25 19:53:26.322669] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.901 [2024-07-25 19:53:26.322807] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:17.836 19:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.836 19:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:17.836 19:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.837 Running I/O for 10 seconds... 00:23:27.821 00:23:27.821 Latency(us) 00:23:27.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.821 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.821 Verification LBA range: start 0x0 length 0x2000 00:23:27.821 TLSTESTn1 : 10.02 2361.90 9.23 0.00 0.00 54086.72 10388.67 49321.91 00:23:27.821 =================================================================================================================== 00:23:27.821 Total : 2361.90 9.23 0.00 0.00 54086.72 10388.67 49321.91 00:23:27.821 0 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4018265 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4018265 ']' 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4018265 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4018265 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4018265' 00:23:27.821 killing process with pid 4018265 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4018265 00:23:27.821 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.821 00:23:27.821 Latency(us) 00:23:27.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.821 =================================================================================================================== 00:23:27.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.821 [2024-07-25 19:53:37.195266] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
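The verify run above follows bdevperf's two-step pattern: the binary is started idle with -z on its own RPC socket, the bdev_nvme_attach_controller call carrying the PSK arrives through the /dev/fd config, and I/O only begins when perform_tests is sent. A condensed sketch of the logged commands, with workspace paths shortened:
# start bdevperf idle, feeding it the bdevperfconf JSON via process substitution (/dev/fd/63 above)
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
# trigger the actual 10-second verify workload over the bdevperf RPC socket
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests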
removal in v24.09 hit 1 times 00:23:27.821 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4018265 00:23:28.081 19:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4018111 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4018111 ']' 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4018111 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4018111 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4018111' 00:23:28.082 killing process with pid 4018111 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4018111 00:23:28.082 [2024-07-25 19:53:37.449235] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.082 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4018111 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4019594 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4019594 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4019594 ']' 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.340 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.341 19:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.341 [2024-07-25 19:53:37.754488] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:28.341 [2024-07-25 19:53:37.754581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.599 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.599 [2024-07-25 19:53:37.824797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.599 [2024-07-25 19:53:37.917629] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:28.599 [2024-07-25 19:53:37.917694] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.599 [2024-07-25 19:53:37.917721] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.599 [2024-07-25 19:53:37.917735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.599 [2024-07-25 19:53:37.917747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.599 [2024-07-25 19:53:37.917778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LQld28tgTU 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LQld28tgTU 00:23:28.858 19:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.117 [2024-07-25 19:53:38.288967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.117 19:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.376 19:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.634 [2024-07-25 19:53:38.822440] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.634 [2024-07-25 19:53:38.822675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.634 19:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.893 malloc0 00:23:29.893 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.152 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LQld28tgTU 00:23:30.411 [2024-07-25 19:53:39.620379] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4019873 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4019873 /var/tmp/bdevperf.sock 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4019873 ']' 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.411 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.411 [2024-07-25 19:53:39.681649] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:30.411 [2024-07-25 19:53:39.681730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019873 ] 00:23:30.411 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.411 [2024-07-25 19:53:39.744729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.411 [2024-07-25 19:53:39.835583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.670 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.670 19:53:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:30.670 19:53:39 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LQld28tgTU 00:23:30.928 19:53:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:31.187 [2024-07-25 19:53:40.460123] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.187 nvme0n1 00:23:31.187 19:53:40 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.447 Running I/O for 1 seconds... 
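The attach above switches from the deprecated PSK-path option to the keyring flow: the key file is registered once under a name (key0 here) and bdev_nvme_attach_controller then references the key by that name, which is why the spdk_nvme_ctrlr_opts.psk deprecation warning seen on the earlier attaches does not appear for it. Condensed from the logged commands:
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LQld28tgTU
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1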
00:23:32.387 00:23:32.387 Latency(us) 00:23:32.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.387 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.387 Verification LBA range: start 0x0 length 0x2000 00:23:32.387 nvme0n1 : 1.02 3191.81 12.47 0.00 0.00 39713.15 7767.23 47768.46 00:23:32.387 =================================================================================================================== 00:23:32.387 Total : 3191.81 12.47 0.00 0.00 39713.15 7767.23 47768.46 00:23:32.387 0 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4019873 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4019873 ']' 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4019873 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4019873 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4019873' 00:23:32.387 killing process with pid 4019873 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4019873 00:23:32.387 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.387 00:23:32.387 Latency(us) 00:23:32.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.387 =================================================================================================================== 00:23:32.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.387 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4019873 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4019594 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4019594 ']' 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4019594 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4019594 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4019594' 00:23:32.646 killing process with pid 4019594 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4019594 00:23:32.646 [2024-07-25 19:53:41.971675] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:32.646 19:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4019594 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.905 
19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4020156 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4020156 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4020156 ']' 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.905 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.905 [2024-07-25 19:53:42.237699] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:32.905 [2024-07-25 19:53:42.237788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.905 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.905 [2024-07-25 19:53:42.304777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.166 [2024-07-25 19:53:42.399053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.166 [2024-07-25 19:53:42.399114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.166 [2024-07-25 19:53:42.399129] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.166 [2024-07-25 19:53:42.399142] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.166 [2024-07-25 19:53:42.399153] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.166 [2024-07-25 19:53:42.399185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.166 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.166 [2024-07-25 19:53:42.548676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.166 malloc0 00:23:33.166 [2024-07-25 19:53:42.581281] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.166 [2024-07-25 19:53:42.581571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4020294 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4020294 /var/tmp/bdevperf.sock 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4020294 ']' 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.427 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.427 [2024-07-25 19:53:42.652250] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:33.427 [2024-07-25 19:53:42.652313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020294 ] 00:23:33.427 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.427 [2024-07-25 19:53:42.716182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.427 [2024-07-25 19:53:42.808507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.685 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.685 19:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.685 19:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LQld28tgTU 00:23:33.943 19:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:34.201 [2024-07-25 19:53:43.376636] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.201 nvme0n1 00:23:34.201 19:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.201 Running I/O for 1 seconds... 00:23:35.578 00:23:35.578 Latency(us) 00:23:35.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.578 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.578 Verification LBA range: start 0x0 length 0x2000 00:23:35.578 nvme0n1 : 1.02 3342.51 13.06 0.00 0.00 37946.96 8349.77 43108.12 00:23:35.578 =================================================================================================================== 00:23:35.578 Total : 3342.51 13.06 0.00 0.00 37946.96 8349.77 43108.12 00:23:35.578 0 00:23:35.578 19:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:35.578 19:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.578 19:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.578 19:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.578 19:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:35.578 "subsystems": [ 00:23:35.578 { 00:23:35.578 "subsystem": "keyring", 00:23:35.578 "config": [ 00:23:35.578 { 00:23:35.578 "method": "keyring_file_add_key", 00:23:35.578 "params": { 00:23:35.578 "name": "key0", 00:23:35.578 "path": "/tmp/tmp.LQld28tgTU" 00:23:35.578 } 00:23:35.578 } 00:23:35.578 ] 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "subsystem": "iobuf", 00:23:35.578 "config": [ 00:23:35.578 { 00:23:35.578 "method": "iobuf_set_options", 00:23:35.578 "params": { 00:23:35.578 "small_pool_count": 8192, 00:23:35.578 "large_pool_count": 1024, 00:23:35.578 "small_bufsize": 8192, 00:23:35.578 "large_bufsize": 135168 00:23:35.578 } 00:23:35.578 } 00:23:35.578 ] 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "subsystem": "sock", 00:23:35.578 "config": [ 00:23:35.578 { 00:23:35.578 "method": "sock_set_default_impl", 00:23:35.578 "params": { 00:23:35.578 "impl_name": "posix" 00:23:35.578 } 00:23:35.578 }, 00:23:35.578 
{ 00:23:35.578 "method": "sock_impl_set_options", 00:23:35.578 "params": { 00:23:35.578 "impl_name": "ssl", 00:23:35.578 "recv_buf_size": 4096, 00:23:35.578 "send_buf_size": 4096, 00:23:35.578 "enable_recv_pipe": true, 00:23:35.578 "enable_quickack": false, 00:23:35.578 "enable_placement_id": 0, 00:23:35.578 "enable_zerocopy_send_server": true, 00:23:35.578 "enable_zerocopy_send_client": false, 00:23:35.578 "zerocopy_threshold": 0, 00:23:35.578 "tls_version": 0, 00:23:35.578 "enable_ktls": false 00:23:35.578 } 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "method": "sock_impl_set_options", 00:23:35.578 "params": { 00:23:35.578 "impl_name": "posix", 00:23:35.578 "recv_buf_size": 2097152, 00:23:35.578 "send_buf_size": 2097152, 00:23:35.578 "enable_recv_pipe": true, 00:23:35.578 "enable_quickack": false, 00:23:35.578 "enable_placement_id": 0, 00:23:35.578 "enable_zerocopy_send_server": true, 00:23:35.578 "enable_zerocopy_send_client": false, 00:23:35.578 "zerocopy_threshold": 0, 00:23:35.578 "tls_version": 0, 00:23:35.578 "enable_ktls": false 00:23:35.578 } 00:23:35.578 } 00:23:35.578 ] 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "subsystem": "vmd", 00:23:35.578 "config": [] 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "subsystem": "accel", 00:23:35.578 "config": [ 00:23:35.578 { 00:23:35.578 "method": "accel_set_options", 00:23:35.578 "params": { 00:23:35.578 "small_cache_size": 128, 00:23:35.578 "large_cache_size": 16, 00:23:35.578 "task_count": 2048, 00:23:35.578 "sequence_count": 2048, 00:23:35.578 "buf_count": 2048 00:23:35.578 } 00:23:35.578 } 00:23:35.578 ] 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "subsystem": "bdev", 00:23:35.578 "config": [ 00:23:35.578 { 00:23:35.578 "method": "bdev_set_options", 00:23:35.578 "params": { 00:23:35.578 "bdev_io_pool_size": 65535, 00:23:35.578 "bdev_io_cache_size": 256, 00:23:35.578 "bdev_auto_examine": true, 00:23:35.578 "iobuf_small_cache_size": 128, 00:23:35.578 "iobuf_large_cache_size": 16 00:23:35.578 } 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "method": "bdev_raid_set_options", 00:23:35.578 "params": { 00:23:35.578 "process_window_size_kb": 1024 00:23:35.578 } 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "method": "bdev_iscsi_set_options", 00:23:35.578 "params": { 00:23:35.578 "timeout_sec": 30 00:23:35.578 } 00:23:35.578 }, 00:23:35.578 { 00:23:35.578 "method": "bdev_nvme_set_options", 00:23:35.578 "params": { 00:23:35.578 "action_on_timeout": "none", 00:23:35.578 "timeout_us": 0, 00:23:35.579 "timeout_admin_us": 0, 00:23:35.579 "keep_alive_timeout_ms": 10000, 00:23:35.579 "arbitration_burst": 0, 00:23:35.579 "low_priority_weight": 0, 00:23:35.579 "medium_priority_weight": 0, 00:23:35.579 "high_priority_weight": 0, 00:23:35.579 "nvme_adminq_poll_period_us": 10000, 00:23:35.579 "nvme_ioq_poll_period_us": 0, 00:23:35.579 "io_queue_requests": 0, 00:23:35.579 "delay_cmd_submit": true, 00:23:35.579 "transport_retry_count": 4, 00:23:35.579 "bdev_retry_count": 3, 00:23:35.579 "transport_ack_timeout": 0, 00:23:35.579 "ctrlr_loss_timeout_sec": 0, 00:23:35.579 "reconnect_delay_sec": 0, 00:23:35.579 "fast_io_fail_timeout_sec": 0, 00:23:35.579 "disable_auto_failback": false, 00:23:35.579 "generate_uuids": false, 00:23:35.579 "transport_tos": 0, 00:23:35.579 "nvme_error_stat": false, 00:23:35.579 "rdma_srq_size": 0, 00:23:35.579 "io_path_stat": false, 00:23:35.579 "allow_accel_sequence": false, 00:23:35.579 "rdma_max_cq_size": 0, 00:23:35.579 "rdma_cm_event_timeout_ms": 0, 00:23:35.579 "dhchap_digests": [ 00:23:35.579 "sha256", 00:23:35.579 "sha384", 
00:23:35.579 "sha512" 00:23:35.579 ], 00:23:35.579 "dhchap_dhgroups": [ 00:23:35.579 "null", 00:23:35.579 "ffdhe2048", 00:23:35.579 "ffdhe3072", 00:23:35.579 "ffdhe4096", 00:23:35.579 "ffdhe6144", 00:23:35.579 "ffdhe8192" 00:23:35.579 ] 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "bdev_nvme_set_hotplug", 00:23:35.579 "params": { 00:23:35.579 "period_us": 100000, 00:23:35.579 "enable": false 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "bdev_malloc_create", 00:23:35.579 "params": { 00:23:35.579 "name": "malloc0", 00:23:35.579 "num_blocks": 8192, 00:23:35.579 "block_size": 4096, 00:23:35.579 "physical_block_size": 4096, 00:23:35.579 "uuid": "b6df1040-455c-4cb9-a2ee-a81e060ae9a7", 00:23:35.579 "optimal_io_boundary": 0 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "bdev_wait_for_examine" 00:23:35.579 } 00:23:35.579 ] 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "subsystem": "nbd", 00:23:35.579 "config": [] 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "subsystem": "scheduler", 00:23:35.579 "config": [ 00:23:35.579 { 00:23:35.579 "method": "framework_set_scheduler", 00:23:35.579 "params": { 00:23:35.579 "name": "static" 00:23:35.579 } 00:23:35.579 } 00:23:35.579 ] 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "subsystem": "nvmf", 00:23:35.579 "config": [ 00:23:35.579 { 00:23:35.579 "method": "nvmf_set_config", 00:23:35.579 "params": { 00:23:35.579 "discovery_filter": "match_any", 00:23:35.579 "admin_cmd_passthru": { 00:23:35.579 "identify_ctrlr": false 00:23:35.579 } 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_set_max_subsystems", 00:23:35.579 "params": { 00:23:35.579 "max_subsystems": 1024 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_set_crdt", 00:23:35.579 "params": { 00:23:35.579 "crdt1": 0, 00:23:35.579 "crdt2": 0, 00:23:35.579 "crdt3": 0 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_create_transport", 00:23:35.579 "params": { 00:23:35.579 "trtype": "TCP", 00:23:35.579 "max_queue_depth": 128, 00:23:35.579 "max_io_qpairs_per_ctrlr": 127, 00:23:35.579 "in_capsule_data_size": 4096, 00:23:35.579 "max_io_size": 131072, 00:23:35.579 "io_unit_size": 131072, 00:23:35.579 "max_aq_depth": 128, 00:23:35.579 "num_shared_buffers": 511, 00:23:35.579 "buf_cache_size": 4294967295, 00:23:35.579 "dif_insert_or_strip": false, 00:23:35.579 "zcopy": false, 00:23:35.579 "c2h_success": false, 00:23:35.579 "sock_priority": 0, 00:23:35.579 "abort_timeout_sec": 1, 00:23:35.579 "ack_timeout": 0, 00:23:35.579 "data_wr_pool_size": 0 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_create_subsystem", 00:23:35.579 "params": { 00:23:35.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.579 "allow_any_host": false, 00:23:35.579 "serial_number": "00000000000000000000", 00:23:35.579 "model_number": "SPDK bdev Controller", 00:23:35.579 "max_namespaces": 32, 00:23:35.579 "min_cntlid": 1, 00:23:35.579 "max_cntlid": 65519, 00:23:35.579 "ana_reporting": false 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_subsystem_add_host", 00:23:35.579 "params": { 00:23:35.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.579 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.579 "psk": "key0" 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_subsystem_add_ns", 00:23:35.579 "params": { 00:23:35.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.579 "namespace": { 00:23:35.579 "nsid": 1, 00:23:35.579 "bdev_name": 
"malloc0", 00:23:35.579 "nguid": "B6DF1040455C4CB9A2EEA81E060AE9A7", 00:23:35.579 "uuid": "b6df1040-455c-4cb9-a2ee-a81e060ae9a7", 00:23:35.579 "no_auto_visible": false 00:23:35.579 } 00:23:35.579 } 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "method": "nvmf_subsystem_add_listener", 00:23:35.579 "params": { 00:23:35.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.579 "listen_address": { 00:23:35.579 "trtype": "TCP", 00:23:35.579 "adrfam": "IPv4", 00:23:35.579 "traddr": "10.0.0.2", 00:23:35.579 "trsvcid": "4420" 00:23:35.579 }, 00:23:35.579 "secure_channel": true 00:23:35.579 } 00:23:35.579 } 00:23:35.579 ] 00:23:35.579 } 00:23:35.579 ] 00:23:35.579 }' 00:23:35.580 19:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:35.838 19:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:35.838 "subsystems": [ 00:23:35.838 { 00:23:35.838 "subsystem": "keyring", 00:23:35.838 "config": [ 00:23:35.838 { 00:23:35.838 "method": "keyring_file_add_key", 00:23:35.838 "params": { 00:23:35.838 "name": "key0", 00:23:35.838 "path": "/tmp/tmp.LQld28tgTU" 00:23:35.838 } 00:23:35.838 } 00:23:35.838 ] 00:23:35.838 }, 00:23:35.838 { 00:23:35.838 "subsystem": "iobuf", 00:23:35.838 "config": [ 00:23:35.838 { 00:23:35.838 "method": "iobuf_set_options", 00:23:35.838 "params": { 00:23:35.838 "small_pool_count": 8192, 00:23:35.838 "large_pool_count": 1024, 00:23:35.838 "small_bufsize": 8192, 00:23:35.838 "large_bufsize": 135168 00:23:35.838 } 00:23:35.838 } 00:23:35.838 ] 00:23:35.838 }, 00:23:35.838 { 00:23:35.838 "subsystem": "sock", 00:23:35.838 "config": [ 00:23:35.838 { 00:23:35.838 "method": "sock_set_default_impl", 00:23:35.838 "params": { 00:23:35.838 "impl_name": "posix" 00:23:35.838 } 00:23:35.838 }, 00:23:35.838 { 00:23:35.839 "method": "sock_impl_set_options", 00:23:35.839 "params": { 00:23:35.839 "impl_name": "ssl", 00:23:35.839 "recv_buf_size": 4096, 00:23:35.839 "send_buf_size": 4096, 00:23:35.839 "enable_recv_pipe": true, 00:23:35.839 "enable_quickack": false, 00:23:35.839 "enable_placement_id": 0, 00:23:35.839 "enable_zerocopy_send_server": true, 00:23:35.839 "enable_zerocopy_send_client": false, 00:23:35.839 "zerocopy_threshold": 0, 00:23:35.839 "tls_version": 0, 00:23:35.839 "enable_ktls": false 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "sock_impl_set_options", 00:23:35.839 "params": { 00:23:35.839 "impl_name": "posix", 00:23:35.839 "recv_buf_size": 2097152, 00:23:35.839 "send_buf_size": 2097152, 00:23:35.839 "enable_recv_pipe": true, 00:23:35.839 "enable_quickack": false, 00:23:35.839 "enable_placement_id": 0, 00:23:35.839 "enable_zerocopy_send_server": true, 00:23:35.839 "enable_zerocopy_send_client": false, 00:23:35.839 "zerocopy_threshold": 0, 00:23:35.839 "tls_version": 0, 00:23:35.839 "enable_ktls": false 00:23:35.839 } 00:23:35.839 } 00:23:35.839 ] 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "subsystem": "vmd", 00:23:35.839 "config": [] 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "subsystem": "accel", 00:23:35.839 "config": [ 00:23:35.839 { 00:23:35.839 "method": "accel_set_options", 00:23:35.839 "params": { 00:23:35.839 "small_cache_size": 128, 00:23:35.839 "large_cache_size": 16, 00:23:35.839 "task_count": 2048, 00:23:35.839 "sequence_count": 2048, 00:23:35.839 "buf_count": 2048 00:23:35.839 } 00:23:35.839 } 00:23:35.839 ] 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "subsystem": "bdev", 00:23:35.839 "config": [ 00:23:35.839 { 00:23:35.839 
"method": "bdev_set_options", 00:23:35.839 "params": { 00:23:35.839 "bdev_io_pool_size": 65535, 00:23:35.839 "bdev_io_cache_size": 256, 00:23:35.839 "bdev_auto_examine": true, 00:23:35.839 "iobuf_small_cache_size": 128, 00:23:35.839 "iobuf_large_cache_size": 16 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_raid_set_options", 00:23:35.839 "params": { 00:23:35.839 "process_window_size_kb": 1024 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_iscsi_set_options", 00:23:35.839 "params": { 00:23:35.839 "timeout_sec": 30 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_nvme_set_options", 00:23:35.839 "params": { 00:23:35.839 "action_on_timeout": "none", 00:23:35.839 "timeout_us": 0, 00:23:35.839 "timeout_admin_us": 0, 00:23:35.839 "keep_alive_timeout_ms": 10000, 00:23:35.839 "arbitration_burst": 0, 00:23:35.839 "low_priority_weight": 0, 00:23:35.839 "medium_priority_weight": 0, 00:23:35.839 "high_priority_weight": 0, 00:23:35.839 "nvme_adminq_poll_period_us": 10000, 00:23:35.839 "nvme_ioq_poll_period_us": 0, 00:23:35.839 "io_queue_requests": 512, 00:23:35.839 "delay_cmd_submit": true, 00:23:35.839 "transport_retry_count": 4, 00:23:35.839 "bdev_retry_count": 3, 00:23:35.839 "transport_ack_timeout": 0, 00:23:35.839 "ctrlr_loss_timeout_sec": 0, 00:23:35.839 "reconnect_delay_sec": 0, 00:23:35.839 "fast_io_fail_timeout_sec": 0, 00:23:35.839 "disable_auto_failback": false, 00:23:35.839 "generate_uuids": false, 00:23:35.839 "transport_tos": 0, 00:23:35.839 "nvme_error_stat": false, 00:23:35.839 "rdma_srq_size": 0, 00:23:35.839 "io_path_stat": false, 00:23:35.839 "allow_accel_sequence": false, 00:23:35.839 "rdma_max_cq_size": 0, 00:23:35.839 "rdma_cm_event_timeout_ms": 0, 00:23:35.839 "dhchap_digests": [ 00:23:35.839 "sha256", 00:23:35.839 "sha384", 00:23:35.839 "sha512" 00:23:35.839 ], 00:23:35.839 "dhchap_dhgroups": [ 00:23:35.839 "null", 00:23:35.839 "ffdhe2048", 00:23:35.839 "ffdhe3072", 00:23:35.839 "ffdhe4096", 00:23:35.839 "ffdhe6144", 00:23:35.839 "ffdhe8192" 00:23:35.839 ] 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_nvme_attach_controller", 00:23:35.839 "params": { 00:23:35.839 "name": "nvme0", 00:23:35.839 "trtype": "TCP", 00:23:35.839 "adrfam": "IPv4", 00:23:35.839 "traddr": "10.0.0.2", 00:23:35.839 "trsvcid": "4420", 00:23:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.839 "prchk_reftag": false, 00:23:35.839 "prchk_guard": false, 00:23:35.839 "ctrlr_loss_timeout_sec": 0, 00:23:35.839 "reconnect_delay_sec": 0, 00:23:35.839 "fast_io_fail_timeout_sec": 0, 00:23:35.839 "psk": "key0", 00:23:35.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.839 "hdgst": false, 00:23:35.839 "ddgst": false 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_nvme_set_hotplug", 00:23:35.839 "params": { 00:23:35.839 "period_us": 100000, 00:23:35.839 "enable": false 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_enable_histogram", 00:23:35.839 "params": { 00:23:35.839 "name": "nvme0n1", 00:23:35.839 "enable": true 00:23:35.839 } 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "method": "bdev_wait_for_examine" 00:23:35.839 } 00:23:35.839 ] 00:23:35.839 }, 00:23:35.839 { 00:23:35.839 "subsystem": "nbd", 00:23:35.839 "config": [] 00:23:35.839 } 00:23:35.839 ] 00:23:35.839 }' 00:23:35.839 19:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4020294 00:23:35.839 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4020294 
']' 00:23:35.839 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4020294 00:23:35.839 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.839 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.839 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4020294 00:23:35.840 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:35.840 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:35.840 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4020294' 00:23:35.840 killing process with pid 4020294 00:23:35.840 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4020294 00:23:35.840 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.840 00:23:35.840 Latency(us) 00:23:35.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.840 =================================================================================================================== 00:23:35.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.840 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4020294 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4020156 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4020156 ']' 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4020156 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4020156 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4020156' 00:23:36.099 killing process with pid 4020156 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4020156 00:23:36.099 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4020156 00:23:36.358 19:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:36.358 19:53:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.358 19:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:36.358 "subsystems": [ 00:23:36.358 { 00:23:36.358 "subsystem": "keyring", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "keyring_file_add_key", 00:23:36.358 "params": { 00:23:36.358 "name": "key0", 00:23:36.358 "path": "/tmp/tmp.LQld28tgTU" 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "iobuf", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "iobuf_set_options", 00:23:36.358 "params": { 00:23:36.358 "small_pool_count": 8192, 00:23:36.358 "large_pool_count": 1024, 00:23:36.358 "small_bufsize": 8192, 00:23:36.358 "large_bufsize": 135168 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "sock", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "sock_set_default_impl", 
00:23:36.358 "params": { 00:23:36.358 "impl_name": "posix" 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "sock_impl_set_options", 00:23:36.358 "params": { 00:23:36.358 "impl_name": "ssl", 00:23:36.358 "recv_buf_size": 4096, 00:23:36.358 "send_buf_size": 4096, 00:23:36.358 "enable_recv_pipe": true, 00:23:36.358 "enable_quickack": false, 00:23:36.358 "enable_placement_id": 0, 00:23:36.358 "enable_zerocopy_send_server": true, 00:23:36.358 "enable_zerocopy_send_client": false, 00:23:36.358 "zerocopy_threshold": 0, 00:23:36.358 "tls_version": 0, 00:23:36.358 "enable_ktls": false 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "sock_impl_set_options", 00:23:36.358 "params": { 00:23:36.358 "impl_name": "posix", 00:23:36.358 "recv_buf_size": 2097152, 00:23:36.358 "send_buf_size": 2097152, 00:23:36.358 "enable_recv_pipe": true, 00:23:36.358 "enable_quickack": false, 00:23:36.358 "enable_placement_id": 0, 00:23:36.358 "enable_zerocopy_send_server": true, 00:23:36.358 "enable_zerocopy_send_client": false, 00:23:36.358 "zerocopy_threshold": 0, 00:23:36.358 "tls_version": 0, 00:23:36.358 "enable_ktls": false 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "vmd", 00:23:36.358 "config": [] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "accel", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "accel_set_options", 00:23:36.358 "params": { 00:23:36.358 "small_cache_size": 128, 00:23:36.358 "large_cache_size": 16, 00:23:36.358 "task_count": 2048, 00:23:36.358 "sequence_count": 2048, 00:23:36.358 "buf_count": 2048 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "bdev", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "bdev_set_options", 00:23:36.358 "params": { 00:23:36.358 "bdev_io_pool_size": 65535, 00:23:36.358 "bdev_io_cache_size": 256, 00:23:36.358 "bdev_auto_examine": true, 00:23:36.358 "iobuf_small_cache_size": 128, 00:23:36.358 "iobuf_large_cache_size": 16 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "bdev_raid_set_options", 00:23:36.358 "params": { 00:23:36.358 "process_window_size_kb": 1024 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "bdev_iscsi_set_options", 00:23:36.358 "params": { 00:23:36.358 "timeout_sec": 30 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "bdev_nvme_set_options", 00:23:36.358 "params": { 00:23:36.358 "action_on_timeout": "none", 00:23:36.358 "timeout_us": 0, 00:23:36.358 "timeout_admin_us": 0, 00:23:36.358 "keep_alive_timeout_ms": 10000, 00:23:36.358 "arbitration_burst": 0, 00:23:36.358 "low_priority_weight": 0, 00:23:36.358 "medium_priority_weight": 0, 00:23:36.358 "high_priority_weight": 0, 00:23:36.358 "nvme_adminq_poll_period_us": 10000, 00:23:36.358 "nvme_ioq_poll_period_us": 0, 00:23:36.358 "io_queue_requests": 0, 00:23:36.358 "delay_cmd_submit": true, 00:23:36.358 "transport_retry_count": 4, 00:23:36.358 "bdev_retry_count": 3, 00:23:36.358 "transport_ack_timeout": 0, 00:23:36.358 "ctrlr_loss_timeout_sec": 0, 00:23:36.358 "reconnect_delay_sec": 0, 00:23:36.358 "fast_io_fail_timeout_sec": 0, 00:23:36.358 "disable_auto_failback": false, 00:23:36.358 "generate_uuids": false, 00:23:36.358 "transport_tos": 0, 00:23:36.358 "nvme_error_stat": false, 00:23:36.358 "rdma_srq_size": 0, 00:23:36.358 "io_path_stat": false, 00:23:36.358 "allow_accel_sequence": false, 00:23:36.358 "rdma_max_cq_size": 0, 00:23:36.358 
"rdma_cm_event_timeout_ms": 0, 00:23:36.358 "dhchap_digests": [ 00:23:36.358 "sha256", 00:23:36.358 "sha384", 00:23:36.358 "sha512" 00:23:36.358 ], 00:23:36.358 "dhchap_dhgroups": [ 00:23:36.358 "null", 00:23:36.358 "ffdhe2048", 00:23:36.358 "ffdhe3072", 00:23:36.358 "ffdhe4096", 00:23:36.358 "ffdhe6144", 00:23:36.358 "ffdhe8192" 00:23:36.358 ] 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "bdev_nvme_set_hotplug", 00:23:36.358 "params": { 00:23:36.358 "period_us": 100000, 00:23:36.358 "enable": false 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "bdev_malloc_create", 00:23:36.358 "params": { 00:23:36.358 "name": "malloc0", 00:23:36.358 "num_blocks": 8192, 00:23:36.358 "block_size": 4096, 00:23:36.358 "physical_block_size": 4096, 00:23:36.358 "uuid": "b6df1040-455c-4cb9-a2ee-a81e060ae9a7", 00:23:36.358 "optimal_io_boundary": 0 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "bdev_wait_for_examine" 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "nbd", 00:23:36.358 "config": [] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "scheduler", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "framework_set_scheduler", 00:23:36.358 "params": { 00:23:36.358 "name": "static" 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.359 { 00:23:36.359 "subsystem": "nvmf", 00:23:36.359 "config": [ 00:23:36.359 { 00:23:36.359 "method": "nvmf_set_config", 00:23:36.359 "params": { 00:23:36.359 "discovery_filter": "match_any", 00:23:36.359 "admin_cmd_passthru": { 00:23:36.359 "identify_ctrlr": false 00:23:36.359 } 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_set_max_subsystems", 00:23:36.359 "params": { 00:23:36.359 "max_subsystems": 1024 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_set_crdt", 00:23:36.359 "params": { 00:23:36.359 "crdt1": 0, 00:23:36.359 "crdt2": 0, 00:23:36.359 "crdt3": 0 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_create_transport", 00:23:36.359 "params": { 00:23:36.359 "trtype": "TCP", 00:23:36.359 "max_queue_depth": 128, 00:23:36.359 "max_io_qpairs_per_ctrlr": 127, 00:23:36.359 "in_capsule_data_size": 4096, 00:23:36.359 "max_io_size": 131072, 00:23:36.359 "io_unit_size": 131072, 00:23:36.359 "max_aq_depth": 128, 00:23:36.359 "num_shared_buffers": 511, 00:23:36.359 "buf_cache_size": 4294967295, 00:23:36.359 "dif_insert_or_strip": false, 00:23:36.359 "zcopy": false, 00:23:36.359 "c2h_success": false, 00:23:36.359 "sock_priority": 0, 00:23:36.359 "abort_timeout_sec": 1, 00:23:36.359 "ack_timeout": 0, 00:23:36.359 "data_wr_pool_size": 0 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_create_subsystem", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "allow_any_host": false, 00:23:36.359 "serial_number": "00000000000000000000", 00:23:36.359 "model_number": "SPDK bdev Controller", 00:23:36.359 "max_namespaces": 32, 00:23:36.359 "min_cntlid": 1, 00:23:36.359 "max_cntlid": 65519, 00:23:36.359 "ana_reporting": false 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_subsystem_add_host", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.359 "psk": "key0" 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_subsystem_add_ns", 00:23:36.359 "params": { 00:23:36.359 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "namespace": { 00:23:36.359 "nsid": 1, 00:23:36.359 "bdev_name": "malloc0", 00:23:36.359 "nguid": "B6DF1040455C4CB9A2EEA81E060AE9A7", 00:23:36.359 "uuid": "b6df1040-455c-4cb9-a2ee-a81e060ae9a7", 00:23:36.359 "no_auto_visible": false 00:23:36.359 } 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_subsystem_add_listener", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "listen_address": { 00:23:36.359 "trtype": "TCP", 00:23:36.359 "adrfam": "IPv4", 00:23:36.359 "traddr": "10.0.0.2", 00:23:36.359 "trsvcid": "4420" 00:23:36.359 }, 00:23:36.359 "secure_channel": true 00:23:36.359 } 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 }' 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4020588 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4020588 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4020588 ']' 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.359 19:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.359 [2024-07-25 19:53:45.670889] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:36.359 [2024-07-25 19:53:45.670965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.359 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.359 [2024-07-25 19:53:45.738255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.618 [2024-07-25 19:53:45.831588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.618 [2024-07-25 19:53:45.831648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.618 [2024-07-25 19:53:45.831664] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.618 [2024-07-25 19:53:45.831678] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.618 [2024-07-25 19:53:45.831690] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.618 [2024-07-25 19:53:45.831783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.877 [2024-07-25 19:53:46.078732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.877 [2024-07-25 19:53:46.110735] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.877 [2024-07-25 19:53:46.129212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4020736 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4020736 /var/tmp/bdevperf.sock 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4020736 ']' 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
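Note: the target above was restarted from the JSON dumped earlier with save_config, passed in as -c /dev/fd/62, and the bdevperf launch that follows does the same with -c /dev/fd/63; those /dev/fd paths are what bash process substitution hands out. A rough sketch of that round-trip pattern, with paths shortened, the ip netns prefix dropped, and placeholder variable names (the exact plumbing inside tls.sh may differ):

    # Capture the live target configuration and replay it into a fresh target.
    tgtcfg=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")     # appears as -c /dev/fd/62

    # Same idea for the initiator: dump bdevperf's config and restart it with it.
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")     # appears as -c /dev/fd/63
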
00:23:37.446 19:53:46 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:37.446 "subsystems": [ 00:23:37.446 { 00:23:37.446 "subsystem": "keyring", 00:23:37.446 "config": [ 00:23:37.446 { 00:23:37.446 "method": "keyring_file_add_key", 00:23:37.446 "params": { 00:23:37.446 "name": "key0", 00:23:37.446 "path": "/tmp/tmp.LQld28tgTU" 00:23:37.446 } 00:23:37.446 } 00:23:37.446 ] 00:23:37.446 }, 00:23:37.446 { 00:23:37.446 "subsystem": "iobuf", 00:23:37.446 "config": [ 00:23:37.446 { 00:23:37.446 "method": "iobuf_set_options", 00:23:37.446 "params": { 00:23:37.446 "small_pool_count": 8192, 00:23:37.446 "large_pool_count": 1024, 00:23:37.446 "small_bufsize": 8192, 00:23:37.446 "large_bufsize": 135168 00:23:37.446 } 00:23:37.446 } 00:23:37.446 ] 00:23:37.446 }, 00:23:37.446 { 00:23:37.446 "subsystem": "sock", 00:23:37.446 "config": [ 00:23:37.446 { 00:23:37.446 "method": "sock_set_default_impl", 00:23:37.446 "params": { 00:23:37.446 "impl_name": "posix" 00:23:37.446 } 00:23:37.446 }, 00:23:37.446 { 00:23:37.446 "method": "sock_impl_set_options", 00:23:37.446 "params": { 00:23:37.446 "impl_name": "ssl", 00:23:37.446 "recv_buf_size": 4096, 00:23:37.446 "send_buf_size": 4096, 00:23:37.446 "enable_recv_pipe": true, 00:23:37.446 "enable_quickack": false, 00:23:37.446 "enable_placement_id": 0, 00:23:37.446 "enable_zerocopy_send_server": true, 00:23:37.446 "enable_zerocopy_send_client": false, 00:23:37.446 "zerocopy_threshold": 0, 00:23:37.446 "tls_version": 0, 00:23:37.446 "enable_ktls": false 00:23:37.446 } 00:23:37.446 }, 00:23:37.446 { 00:23:37.446 "method": "sock_impl_set_options", 00:23:37.446 "params": { 00:23:37.446 "impl_name": "posix", 00:23:37.446 "recv_buf_size": 2097152, 00:23:37.446 "send_buf_size": 2097152, 00:23:37.446 "enable_recv_pipe": true, 00:23:37.446 "enable_quickack": false, 00:23:37.446 "enable_placement_id": 0, 00:23:37.446 "enable_zerocopy_send_server": true, 00:23:37.446 "enable_zerocopy_send_client": false, 00:23:37.446 "zerocopy_threshold": 0, 00:23:37.446 "tls_version": 0, 00:23:37.446 "enable_ktls": false 00:23:37.446 } 00:23:37.446 } 00:23:37.446 ] 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "subsystem": "vmd", 00:23:37.447 "config": [] 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "subsystem": "accel", 00:23:37.447 "config": [ 00:23:37.447 { 00:23:37.447 "method": "accel_set_options", 00:23:37.447 "params": { 00:23:37.447 "small_cache_size": 128, 00:23:37.447 "large_cache_size": 16, 00:23:37.447 "task_count": 2048, 00:23:37.447 "sequence_count": 2048, 00:23:37.447 "buf_count": 2048 00:23:37.447 } 00:23:37.447 } 00:23:37.447 ] 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "subsystem": "bdev", 00:23:37.447 "config": [ 00:23:37.447 { 00:23:37.447 "method": "bdev_set_options", 00:23:37.447 "params": { 00:23:37.447 "bdev_io_pool_size": 65535, 00:23:37.447 "bdev_io_cache_size": 256, 00:23:37.447 "bdev_auto_examine": true, 00:23:37.447 "iobuf_small_cache_size": 128, 00:23:37.447 "iobuf_large_cache_size": 16 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_raid_set_options", 00:23:37.447 "params": { 00:23:37.447 "process_window_size_kb": 1024 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_iscsi_set_options", 00:23:37.447 "params": { 00:23:37.447 "timeout_sec": 30 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_nvme_set_options", 00:23:37.447 "params": { 00:23:37.447 "action_on_timeout": "none", 00:23:37.447 "timeout_us": 0, 00:23:37.447 "timeout_admin_us": 0, 00:23:37.447 "keep_alive_timeout_ms": 
10000, 00:23:37.447 "arbitration_burst": 0, 00:23:37.447 "low_priority_weight": 0, 00:23:37.447 "medium_priority_weight": 0, 00:23:37.447 "high_priority_weight": 0, 00:23:37.447 "nvme_adminq_poll_period_us": 10000, 00:23:37.447 "nvme_ioq_poll_period_us": 0, 00:23:37.447 "io_queue_requests": 512, 00:23:37.447 "delay_cmd_submit": true, 00:23:37.447 "transport_retry_count": 4, 00:23:37.447 "bdev_retry_count": 3, 00:23:37.447 "transport_ack_timeout": 0, 00:23:37.447 "ctrlr_loss_timeout_sec": 0, 00:23:37.447 "reconnect_delay_sec": 0, 00:23:37.447 "fast_io_fail_timeout_sec": 0, 00:23:37.447 "disable_auto_failback": false, 00:23:37.447 "generate_uuids": false, 00:23:37.447 "transport_tos": 0, 00:23:37.447 "nvme_error_stat": false, 00:23:37.447 "rdma_srq_size": 0, 00:23:37.447 "io_path_stat": false, 00:23:37.447 "allow_accel_sequence": false, 00:23:37.447 "rdma_max_cq_size": 0, 00:23:37.447 "rdma_cm_event_timeout_ms": 0, 00:23:37.447 "dhchap_digests": [ 00:23:37.447 "sha256", 00:23:37.447 "sha384", 00:23:37.447 "sha512" 00:23:37.447 ], 00:23:37.447 "dhchap_dhgroups": [ 00:23:37.447 "null", 00:23:37.447 "ffdhe2048", 00:23:37.447 "ffdhe3072", 00:23:37.447 "ffdhe4096", 00:23:37.447 "ffdhe6144", 00:23:37.447 "ffdhe8192" 00:23:37.447 ] 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_nvme_attach_controller", 00:23:37.447 "params": { 00:23:37.447 "name": "nvme0", 00:23:37.447 "trtype": "TCP", 00:23:37.447 "adrfam": "IPv4", 00:23:37.447 "traddr": "10.0.0.2", 00:23:37.447 "trsvcid": "4420", 00:23:37.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.447 "prchk_reftag": false, 00:23:37.447 "prchk_guard": false, 00:23:37.447 "ctrlr_loss_timeout_sec": 0, 00:23:37.447 "reconnect_delay_sec": 0, 00:23:37.447 "fast_io_fail_timeout_sec": 0, 00:23:37.447 "psk": "key0", 00:23:37.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.447 "hdgst": false, 00:23:37.447 "ddgst": false 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_nvme_set_hotplug", 00:23:37.447 "params": { 00:23:37.447 "period_us": 100000, 00:23:37.447 "enable": false 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_enable_histogram", 00:23:37.447 "params": { 00:23:37.447 "name": "nvme0n1", 00:23:37.447 "enable": true 00:23:37.447 } 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "method": "bdev_wait_for_examine" 00:23:37.447 } 00:23:37.447 ] 00:23:37.447 }, 00:23:37.447 { 00:23:37.447 "subsystem": "nbd", 00:23:37.447 "config": [] 00:23:37.447 } 00:23:37.447 ] 00:23:37.447 }' 00:23:37.447 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:37.447 19:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.448 [2024-07-25 19:53:46.745852] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:37.448 [2024-07-25 19:53:46.745919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020736 ] 00:23:37.448 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.448 [2024-07-25 19:53:46.804232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.707 [2024-07-25 19:53:46.896132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.707 [2024-07-25 19:53:47.077199] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.273 19:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.273 19:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:38.273 19:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.273 19:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:38.531 19:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.531 19:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.788 Running I/O for 1 seconds... 00:23:39.757 00:23:39.757 Latency(us) 00:23:39.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.757 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:39.757 Verification LBA range: start 0x0 length 0x2000 00:23:39.757 nvme0n1 : 1.02 3233.50 12.63 0.00 0.00 39185.39 6844.87 45438.29 00:23:39.757 =================================================================================================================== 00:23:39.757 Total : 3233.50 12.63 0.00 0.00 39185.39 6844.87 45438.29 00:23:39.757 0 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:39.757 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:39.757 nvmf_trace.0 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4020736 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4020736 ']' 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4020736 
00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4020736 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4020736' 00:23:40.016 killing process with pid 4020736 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4020736 00:23:40.016 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.016 00:23:40.016 Latency(us) 00:23:40.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.016 =================================================================================================================== 00:23:40.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4020736 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.016 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.016 rmmod nvme_tcp 00:23:40.275 rmmod nvme_fabrics 00:23:40.275 rmmod nvme_keyring 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4020588 ']' 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4020588 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4020588 ']' 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4020588 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4020588 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4020588' 00:23:40.275 killing process with pid 4020588 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4020588 00:23:40.275 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4020588 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.535 19:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.443 19:53:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.443 19:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8yvY1aWdUm /tmp/tmp.RqdtqldNgF /tmp/tmp.LQld28tgTU 00:23:42.443 00:23:42.443 real 1m19.171s 00:23:42.443 user 2m0.364s 00:23:42.443 sys 0m27.350s 00:23:42.443 19:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:42.443 19:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.443 ************************************ 00:23:42.443 END TEST nvmf_tls 00:23:42.443 ************************************ 00:23:42.443 19:53:51 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:42.443 19:53:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:42.443 19:53:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:42.443 19:53:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.443 ************************************ 00:23:42.443 START TEST nvmf_fips 00:23:42.443 ************************************ 00:23:42.443 19:53:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:42.703 * Looking for test storage... 
00:23:42.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.703 19:53:51 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:42.703 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:42.704 19:53:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:42.704 Error setting digest 00:23:42.704 00B251816C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:42.704 00B251816C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:42.704 19:53:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:45.233 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.234 
19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:45.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:45.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:45.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:45.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:23:45.234 00:23:45.234 --- 10.0.0.2 ping statistics --- 00:23:45.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.234 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:23:45.234 00:23:45.234 --- 10.0.0.1 ping statistics --- 00:23:45.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.234 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4023099 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4023099 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 4023099 ']' 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:45.234 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.234 [2024-07-25 19:53:54.436009] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:45.234 [2024-07-25 19:53:54.436122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.234 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.235 [2024-07-25 19:53:54.503188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.235 [2024-07-25 19:53:54.589957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.235 [2024-07-25 19:53:54.590017] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
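[editor's note] The gate fips.sh applies before any NVMe/TCP traffic is visible in the trace above: an OpenSSL >= 3.0.0 version check, a fips.so module lookup, provider enumeration under a generated config, and a deliberate MD5 attempt that must be rejected (the "Error setting digest" output). A condensed sketch of the same check, assuming an OpenSSL 3.x build and using spdk_fips.conf as the config the script generated in this run:

  openssl version | awk '{print $2}'                                   # expect >= 3.0.0
  test -f "$(openssl info -modulesdir)/fips.so"                        # FIPS provider module present
  OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep -i name   # base + fips providers expected
  # a non-approved digest has to fail once only base+fips are loaded
  if echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
      echo "MD5 succeeded - FIPS enforcement is not active" >&2
      exit 1
  fi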
00:23:45.235 [2024-07-25 19:53:54.590030] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.235 [2024-07-25 19:53:54.590041] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.235 [2024-07-25 19:53:54.590050] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.235 [2024-07-25 19:53:54.590099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.493 19:53:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.751 [2024-07-25 19:53:54.973296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.751 [2024-07-25 19:53:54.989297] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.751 [2024-07-25 19:53:54.989528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.751 [2024-07-25 19:53:55.020478] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.751 malloc0 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4023247 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4023247 /var/tmp/bdevperf.sock 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 4023247 ']' 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.751 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:45.752 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.752 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:45.752 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.752 [2024-07-25 19:53:55.105368] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:45.752 [2024-07-25 19:53:55.105455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4023247 ] 00:23:45.752 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.752 [2024-07-25 19:53:55.164217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.010 [2024-07-25 19:53:55.247881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.010 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:46.010 19:53:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:46.010 19:53:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:46.268 [2024-07-25 19:53:55.563648] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.268 [2024-07-25 19:53:55.563766] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:46.268 TLSTESTn1 00:23:46.268 19:53:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:46.525 Running I/O for 10 seconds... 
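[editor's note] The ten-second run started here is driven entirely over the bdevperf RPC socket. Condensing the initiator side from the trace above, with paths shortened to an SPDK checkout and a placeholder key path; the target is assumed to already expose nqn.2016-06.io.spdk:cnode1 with the same interchange-format PSK, which setup_nvmf_tgt_conf arranged via rpc.py:

  key_path=/tmp/psk.txt        # placeholder; the test writes it under test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests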
00:23:56.486 00:23:56.486 Latency(us) 00:23:56.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.486 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:56.486 Verification LBA range: start 0x0 length 0x2000 00:23:56.486 TLSTESTn1 : 10.02 3460.85 13.52 0.00 0.00 36920.65 9757.58 30680.56 00:23:56.486 =================================================================================================================== 00:23:56.486 Total : 3460.85 13.52 0.00 0.00 36920.65 9757.58 30680.56 00:23:56.486 0 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:56.486 nvmf_trace.0 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4023247 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4023247 ']' 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4023247 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:56.486 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4023247 00:23:56.745 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:56.745 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:56.745 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4023247' 00:23:56.745 killing process with pid 4023247 00:23:56.745 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4023247 00:23:56.745 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.745 00:23:56.745 Latency(us) 00:23:56.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.745 =================================================================================================================== 00:23:56.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.745 [2024-07-25 19:54:05.924560] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:56.745 19:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4023247 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.745 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.745 rmmod nvme_tcp 00:23:56.745 rmmod nvme_fabrics 00:23:57.003 rmmod nvme_keyring 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4023099 ']' 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4023099 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4023099 ']' 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4023099 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4023099 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4023099' 00:23:57.003 killing process with pid 4023099 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4023099 00:23:57.003 [2024-07-25 19:54:06.232895] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:57.003 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4023099 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.262 19:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.164 19:54:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.164 19:54:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.164 00:23:59.164 real 0m16.650s 00:23:59.164 user 0m21.320s 00:23:59.164 sys 0m5.481s 00:23:59.164 19:54:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:59.164 19:54:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.164 ************************************ 00:23:59.164 END TEST nvmf_fips 
00:23:59.164 ************************************ 00:23:59.165 19:54:08 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:59.165 19:54:08 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:59.165 19:54:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:59.165 19:54:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:59.165 19:54:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.165 ************************************ 00:23:59.165 START TEST nvmf_fuzz 00:23:59.165 ************************************ 00:23:59.165 19:54:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:59.423 * Looking for test storage... 00:23:59.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.423 19:54:08 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.423 19:54:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.323 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:01.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:01.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:01.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:01.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:24:01.324 00:24:01.324 --- 10.0.0.2 ping statistics --- 00:24:01.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.324 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:01.324 00:24:01.324 --- 10.0.0.1 ping statistics --- 00:24:01.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.324 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.324 19:54:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4026991 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4026991 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 4026991 ']' 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
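The entries above are nvmf_tcp_init splitting the dual-port E810 NIC across network namespaces so that the target side (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (cvl_0_1, 10.0.0.1, root namespace) can reach each other over real hardware on a single host. Stripped of the xtrace noise, and assuming the interface names and addresses seen in this log, the setup amounts to (run as root):

   ip netns add cvl_0_0_ns_spdk                      # target-side namespace
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one NIC port into it
   ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
   ping -c 1 10.0.0.2                                # verify connectivity in both directions
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1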
00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.325 19:54:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.583 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:01.583 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:01.583 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.583 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.583 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.841 Malloc0 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:01.841 19:54:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:33.941 Fuzzing completed. 
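That run boils down to: start nvmf_tgt inside the namespace, configure one malloc-backed subsystem over TCP via RPC, and point nvme_fuzz at it for 30 seconds with a fixed seed. A condensed sketch with the commands and flags taken from the log (rpc_cmd is the autotest wrapper around scripts/rpc.py; binary paths are shown relative to the SPDK tree):

   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
   rpc_cmd nvmf_create_transport -t tcp -o -u 8192
   rpc_cmd bdev_malloc_create -b Malloc0 64 512
   rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
   rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
   rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
   trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
   ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The second nvme_fuzz invocation further down runs the same target again with -j .../example.json, feeding it a predefined command set rather than random input.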
Shutting down the fuzz application 00:24:33.941 00:24:33.941 Dumping successful admin opcodes: 00:24:33.941 8, 9, 10, 24, 00:24:33.941 Dumping successful io opcodes: 00:24:33.941 0, 9, 00:24:33.941 NS: 0x200003aeff00 I/O qp, Total commands completed: 473591, total successful commands: 2737, random_seed: 4256688640 00:24:33.941 NS: 0x200003aeff00 admin qp, Total commands completed: 58271, total successful commands: 464, random_seed: 555740672 00:24:33.941 19:54:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:33.941 Fuzzing completed. Shutting down the fuzz application 00:24:33.941 00:24:33.941 Dumping successful admin opcodes: 00:24:33.941 24, 00:24:33.941 Dumping successful io opcodes: 00:24:33.941 00:24:33.941 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 486037422 00:24:33.941 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 486156170 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.941 rmmod nvme_tcp 00:24:33.941 rmmod nvme_fabrics 00:24:33.941 rmmod nvme_keyring 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4026991 ']' 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 4026991 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 4026991 ']' 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 4026991 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4026991 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:33.941 
19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4026991' 00:24:33.941 killing process with pid 4026991 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 4026991 00:24:33.941 19:54:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 4026991 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.941 19:54:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.842 19:54:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.842 19:54:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:35.842 00:24:35.842 real 0m36.617s 00:24:35.842 user 0m50.786s 00:24:35.842 sys 0m15.197s 00:24:35.842 19:54:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:35.842 19:54:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:35.842 ************************************ 00:24:35.842 END TEST nvmf_fuzz 00:24:35.842 ************************************ 00:24:35.842 19:54:45 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:35.842 19:54:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:35.842 19:54:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:35.842 19:54:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.842 ************************************ 00:24:35.842 START TEST nvmf_multiconnection 00:24:35.842 ************************************ 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:35.842 * Looking for test storage... 
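Before the multiconnection run begins, the entries above tear the fuzz setup down again. Condensed, and using the same helper names that appear in the log (rpc_cmd, killprocess and _remove_spdk_ns come from SPDK's autotest common scripts; workspace paths are shortened here), the cleanup amounts to:

   rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
   killprocess "$nvmfpid"                 # stop nvmf_tgt (pid 4026991 in this run)
   modprobe -v -r nvme-tcp                # unloads nvme_tcp plus nvme_fabrics/nvme_keyring
   modprobe -v -r nvme-fabrics
   ip -4 addr flush cvl_0_1
   _remove_spdk_ns                        # deletes the cvl_0_0_ns_spdk namespace
   rm .../output/nvmf_fuzz_logs1.txt .../output/nvmf_fuzz_logs2.txt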
00:24:35.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.842 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.843 19:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.374 19:54:47 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.374 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.375 19:54:47 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:24:38.375 00:24:38.375 --- 10.0.0.2 ping statistics --- 00:24:38.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.375 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:38.375 00:24:38.375 --- 10.0.0.1 ping statistics --- 00:24:38.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.375 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4032706 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4032706 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 4032706 ']' 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
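For the multiconnection test the same two-port topology is rebuilt, but nvmf_tgt is now started on four cores (-m 0xF) and the script blocks until the target's RPC socket exists before any rpc_cmd call is made. A minimal sketch of that launch-and-wait step, with the flags and socket path taken from the log and a simple polling loop standing in for the autotest waitforlisten helper:

   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
   nvmfpid=$!
   # do not issue RPCs until the target has created its UNIX-domain RPC socket
   until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done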
00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:38.375 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.375 [2024-07-25 19:54:47.469316] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:24:38.375 [2024-07-25 19:54:47.469411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.375 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.375 [2024-07-25 19:54:47.537903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.375 [2024-07-25 19:54:47.628259] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.375 [2024-07-25 19:54:47.628314] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.375 [2024-07-25 19:54:47.628343] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.375 [2024-07-25 19:54:47.628355] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.376 [2024-07-25 19:54:47.628365] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.376 [2024-07-25 19:54:47.628458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.376 [2024-07-25 19:54:47.628522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.376 [2024-07-25 19:54:47.628590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.376 [2024-07-25 19:54:47.628592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.376 [2024-07-25 19:54:47.780829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:38.376 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.376 19:54:47 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 Malloc1 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 [2024-07-25 19:54:47.838472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 Malloc2 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 Malloc3 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 Malloc4 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 Malloc5 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.635 Malloc6 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.635 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 Malloc7 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 Malloc8 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 Malloc9 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 Malloc10 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 Malloc11 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
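The long run of rpc_cmd entries above is a single loop in multiconnection.sh: for each of the NVMF_SUBSYS=11 subsystems it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem with serial SPDK<i>, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. Condensed into the loop it comes from (rpc_cmd again being the autotest wrapper around scripts/rpc.py):

   for i in $(seq 1 11); do
       rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
       rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
       rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
       rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
   done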
00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.895 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:39.153 19:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.153 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:39.153 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:39.153 19:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:39.719 19:54:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:39.719 19:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:39.719 19:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.719 19:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:39.719 19:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.615 19:54:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:42.547 19:54:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:42.547 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:42.547 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.547 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:42.547 19:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.444 
19:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.444 19:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:45.007 19:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:45.007 19:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:45.007 19:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.007 19:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:45.007 19:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.903 19:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:47.836 19:54:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:47.836 19:54:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:47.836 19:54:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.836 19:54:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:47.836 19:54:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.734 19:54:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:50.667 19:54:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:50.667 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:50.667 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.667 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:50.667 19:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.560 19:55:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:53.490 19:55:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:53.490 19:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:53.490 19:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.490 19:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:53.490 19:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.417 19:55:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:55.982 19:55:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:55.982 19:55:05 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:55.982 19:55:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.982 19:55:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:55.982 19:55:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.505 19:55:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:59.070 19:55:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:59.070 19:55:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:59.070 19:55:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.070 19:55:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:59.070 19:55:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.596 19:55:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:01.854 19:55:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:01.854 19:55:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:01.854 19:55:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.854 19:55:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
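On the initiator side, multiconnection.sh lines 28-30 connect to each of the 11 subsystems over TCP and then wait (waitforserial) until lsblk reports a block device whose serial matches the SPDK$i string configured on the target. A minimal sketch of that connect-and-poll loop, with the host NQN/ID values taken from this run and a retry budget (up to 16 tries, 2 s apart) mirroring the helper traced above:

HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
for i in $(seq 1 11); do
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # poll until the kernel surfaces a namespace with the expected serial
    for try in $(seq 0 15); do
        sleep 2
        count=$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")
        [ "$count" -ge 1 ] && break
    done
done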
00:25:01.854 19:55:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.749 19:55:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:05.120 19:55:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:05.120 19:55:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:05.120 19:55:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.120 19:55:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:05.120 19:55:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.017 19:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:07.582 19:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:07.582 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:07.582 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.582 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:07.582 19:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:10.107 19:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:10.107 19:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:10.107 19:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:10.107 19:55:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:10.107 19:55:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.107 19:55:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:10.107 19:55:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:10.107 [global] 00:25:10.107 thread=1 00:25:10.107 invalidate=1 00:25:10.107 rw=read 00:25:10.107 time_based=1 00:25:10.107 runtime=10 00:25:10.107 ioengine=libaio 00:25:10.107 direct=1 00:25:10.107 bs=262144 00:25:10.107 iodepth=64 00:25:10.107 norandommap=1 00:25:10.107 numjobs=1 00:25:10.107 00:25:10.107 [job0] 00:25:10.107 filename=/dev/nvme0n1 00:25:10.107 [job1] 00:25:10.107 filename=/dev/nvme10n1 00:25:10.107 [job2] 00:25:10.107 filename=/dev/nvme1n1 00:25:10.107 [job3] 00:25:10.107 filename=/dev/nvme2n1 00:25:10.107 [job4] 00:25:10.107 filename=/dev/nvme3n1 00:25:10.107 [job5] 00:25:10.107 filename=/dev/nvme4n1 00:25:10.107 [job6] 00:25:10.107 filename=/dev/nvme5n1 00:25:10.107 [job7] 00:25:10.107 filename=/dev/nvme6n1 00:25:10.107 [job8] 00:25:10.107 filename=/dev/nvme7n1 00:25:10.107 [job9] 00:25:10.107 filename=/dev/nvme8n1 00:25:10.107 [job10] 00:25:10.107 filename=/dev/nvme9n1 00:25:10.107 Could not set queue depth (nvme0n1) 00:25:10.107 Could not set queue depth (nvme10n1) 00:25:10.107 Could not set queue depth (nvme1n1) 00:25:10.107 Could not set queue depth (nvme2n1) 00:25:10.107 Could not set queue depth (nvme3n1) 00:25:10.107 Could not set queue depth (nvme4n1) 00:25:10.107 Could not set queue depth (nvme5n1) 00:25:10.107 Could not set queue depth (nvme6n1) 00:25:10.107 Could not set queue depth (nvme7n1) 00:25:10.107 Could not set queue depth (nvme8n1) 00:25:10.107 Could not set queue depth (nvme9n1) 00:25:10.107 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.107 fio-3.35 00:25:10.107 Starting 11 threads 00:25:22.308 00:25:22.308 job0: 
(groupid=0, jobs=1): err= 0: pid=4036973: Thu Jul 25 19:55:29 2024 00:25:22.308 read: IOPS=705, BW=176MiB/s (185MB/s)(1780MiB/10097msec) 00:25:22.308 slat (usec): min=9, max=87395, avg=905.25, stdev=4189.12 00:25:22.308 clat (usec): min=757, max=252652, avg=89804.25, stdev=53548.20 00:25:22.308 lat (usec): min=780, max=252693, avg=90709.50, stdev=54255.00 00:25:22.308 clat percentiles (msec): 00:25:22.308 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 33], 00:25:22.308 | 30.00th=[ 59], 40.00th=[ 73], 50.00th=[ 91], 60.00th=[ 110], 00:25:22.308 | 70.00th=[ 125], 80.00th=[ 140], 90.00th=[ 161], 95.00th=[ 174], 00:25:22.308 | 99.00th=[ 203], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 253], 00:25:22.308 | 99.99th=[ 253] 00:25:22.308 bw ( KiB/s): min=90624, max=332800, per=9.51%, avg=180633.60, stdev=73363.05, samples=20 00:25:22.308 iops : min= 354, max= 1300, avg=705.60, stdev=286.57, samples=20 00:25:22.308 lat (usec) : 1000=0.15% 00:25:22.308 lat (msec) : 2=0.84%, 4=1.88%, 10=5.35%, 20=6.26%, 50=12.14% 00:25:22.308 lat (msec) : 100=28.59%, 250=44.70%, 500=0.08% 00:25:22.308 cpu : usr=0.35%, sys=1.88%, ctx=1552, majf=0, minf=4097 00:25:22.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:22.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.308 issued rwts: total=7119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.308 job1: (groupid=0, jobs=1): err= 0: pid=4036976: Thu Jul 25 19:55:29 2024 00:25:22.308 read: IOPS=682, BW=171MiB/s (179MB/s)(1724MiB/10098msec) 00:25:22.308 slat (usec): min=13, max=95600, avg=1358.94, stdev=4709.69 00:25:22.308 clat (msec): min=18, max=269, avg=92.30, stdev=41.06 00:25:22.308 lat (msec): min=18, max=277, avg=93.65, stdev=41.70 00:25:22.308 clat percentiles (msec): 00:25:22.308 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 59], 00:25:22.308 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 92], 00:25:22.308 | 70.00th=[ 112], 80.00th=[ 134], 90.00th=[ 157], 95.00th=[ 169], 00:25:22.308 | 99.00th=[ 188], 99.50th=[ 199], 99.90th=[ 222], 99.95th=[ 232], 00:25:22.308 | 99.99th=[ 271] 00:25:22.308 bw ( KiB/s): min=95232, max=317440, per=9.20%, avg=174857.90, stdev=63076.84, samples=20 00:25:22.308 iops : min= 372, max= 1240, avg=683.00, stdev=246.44, samples=20 00:25:22.308 lat (msec) : 20=0.04%, 50=11.92%, 100=52.03%, 250=35.99%, 500=0.01% 00:25:22.308 cpu : usr=0.39%, sys=2.27%, ctx=1268, majf=0, minf=4097 00:25:22.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:22.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.308 issued rwts: total=6894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.308 job2: (groupid=0, jobs=1): err= 0: pid=4036979: Thu Jul 25 19:55:29 2024 00:25:22.308 read: IOPS=1045, BW=261MiB/s (274MB/s)(2617MiB/10011msec) 00:25:22.308 slat (usec): min=9, max=89171, avg=725.85, stdev=3301.84 00:25:22.308 clat (usec): min=800, max=213302, avg=60445.07, stdev=39004.45 00:25:22.308 lat (usec): min=816, max=213321, avg=61170.92, stdev=39512.28 00:25:22.308 clat percentiles (msec): 00:25:22.308 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 29], 00:25:22.308 | 30.00th=[ 32], 40.00th=[ 36], 
50.00th=[ 50], 60.00th=[ 64], 00:25:22.308 | 70.00th=[ 78], 80.00th=[ 97], 90.00th=[ 124], 95.00th=[ 136], 00:25:22.308 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 188], 00:25:22.308 | 99.99th=[ 213] 00:25:22.308 bw ( KiB/s): min=133632, max=533504, per=14.02%, avg=266368.00, stdev=98537.25, samples=20 00:25:22.308 iops : min= 522, max= 2084, avg=1040.50, stdev=384.91, samples=20 00:25:22.308 lat (usec) : 1000=0.04% 00:25:22.308 lat (msec) : 2=0.41%, 4=0.29%, 10=1.84%, 20=6.21%, 50=41.84% 00:25:22.308 lat (msec) : 100=31.08%, 250=18.29% 00:25:22.308 cpu : usr=0.61%, sys=2.83%, ctx=1919, majf=0, minf=4097 00:25:22.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:22.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.308 issued rwts: total=10468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.308 job3: (groupid=0, jobs=1): err= 0: pid=4036980: Thu Jul 25 19:55:29 2024 00:25:22.308 read: IOPS=554, BW=139MiB/s (145MB/s)(1400MiB/10094msec) 00:25:22.308 slat (usec): min=9, max=117030, avg=1326.03, stdev=4904.43 00:25:22.308 clat (usec): min=1220, max=230956, avg=114002.69, stdev=38950.59 00:25:22.308 lat (usec): min=1261, max=250489, avg=115328.72, stdev=39712.64 00:25:22.308 clat percentiles (msec): 00:25:22.308 | 1.00th=[ 4], 5.00th=[ 26], 10.00th=[ 75], 20.00th=[ 90], 00:25:22.308 | 30.00th=[ 99], 40.00th=[ 108], 50.00th=[ 116], 60.00th=[ 123], 00:25:22.309 | 70.00th=[ 136], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 171], 00:25:22.309 | 99.00th=[ 188], 99.50th=[ 201], 99.90th=[ 226], 99.95th=[ 226], 00:25:22.309 | 99.99th=[ 232] 00:25:22.309 bw ( KiB/s): min=94720, max=309760, per=7.46%, avg=141707.65, stdev=45851.52, samples=20 00:25:22.309 iops : min= 370, max= 1210, avg=553.50, stdev=179.13, samples=20 00:25:22.309 lat (msec) : 2=0.14%, 4=1.34%, 10=1.32%, 20=1.21%, 50=3.16% 00:25:22.309 lat (msec) : 100=24.62%, 250=68.20% 00:25:22.309 cpu : usr=0.24%, sys=1.76%, ctx=1223, majf=0, minf=4097 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=5598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job4: (groupid=0, jobs=1): err= 0: pid=4036981: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=553, BW=138MiB/s (145MB/s)(1399MiB/10107msec) 00:25:22.309 slat (usec): min=9, max=53211, avg=1246.96, stdev=4325.10 00:25:22.309 clat (usec): min=1781, max=249776, avg=114239.34, stdev=37357.20 00:25:22.309 lat (usec): min=1799, max=249794, avg=115486.30, stdev=37965.50 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 69], 20.00th=[ 87], 00:25:22.309 | 30.00th=[ 99], 40.00th=[ 110], 50.00th=[ 118], 60.00th=[ 126], 00:25:22.309 | 70.00th=[ 134], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 169], 00:25:22.309 | 99.00th=[ 190], 99.50th=[ 199], 99.90th=[ 228], 99.95th=[ 232], 00:25:22.309 | 99.99th=[ 251] 00:25:22.309 bw ( KiB/s): min=95744, max=187904, per=7.46%, avg=141670.40, stdev=26699.09, samples=20 00:25:22.309 iops : min= 374, max= 734, avg=553.40, stdev=104.29, samples=20 00:25:22.309 lat (msec) : 2=0.02%, 4=0.64%, 
10=0.50%, 20=0.79%, 50=4.75% 00:25:22.309 lat (msec) : 100=24.51%, 250=68.79% 00:25:22.309 cpu : usr=0.39%, sys=1.67%, ctx=1313, majf=0, minf=4097 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=5597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job5: (groupid=0, jobs=1): err= 0: pid=4036982: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=665, BW=166MiB/s (174MB/s)(1681MiB/10109msec) 00:25:22.309 slat (usec): min=13, max=71190, avg=1383.05, stdev=4277.09 00:25:22.309 clat (msec): min=4, max=248, avg=94.75, stdev=39.68 00:25:22.309 lat (msec): min=4, max=251, avg=96.13, stdev=40.33 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 37], 00:25:22.309 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 109], 00:25:22.309 | 70.00th=[ 117], 80.00th=[ 127], 90.00th=[ 140], 95.00th=[ 155], 00:25:22.309 | 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 234], 99.95th=[ 249], 00:25:22.309 | 99.99th=[ 249] 00:25:22.309 bw ( KiB/s): min=107008, max=414208, per=8.98%, avg=170521.60, stdev=78652.91, samples=20 00:25:22.309 iops : min= 418, max= 1618, avg=666.10, stdev=307.24, samples=20 00:25:22.309 lat (msec) : 10=0.39%, 20=0.77%, 50=19.63%, 100=29.44%, 250=49.77% 00:25:22.309 cpu : usr=0.36%, sys=2.20%, ctx=1358, majf=0, minf=4097 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=6725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job6: (groupid=0, jobs=1): err= 0: pid=4036983: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=533, BW=133MiB/s (140MB/s)(1347MiB/10097msec) 00:25:22.309 slat (usec): min=13, max=77570, avg=1590.97, stdev=4875.22 00:25:22.309 clat (msec): min=5, max=258, avg=118.28, stdev=35.16 00:25:22.309 lat (msec): min=5, max=258, avg=119.87, stdev=35.89 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 13], 5.00th=[ 65], 10.00th=[ 81], 20.00th=[ 94], 00:25:22.309 | 30.00th=[ 103], 40.00th=[ 113], 50.00th=[ 122], 60.00th=[ 129], 00:25:22.309 | 70.00th=[ 136], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 171], 00:25:22.309 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 241], 00:25:22.309 | 99.99th=[ 259] 00:25:22.309 bw ( KiB/s): min=95744, max=215552, per=7.17%, avg=136307.80, stdev=27697.62, samples=20 00:25:22.309 iops : min= 374, max= 842, avg=532.45, stdev=108.19, samples=20 00:25:22.309 lat (msec) : 10=0.46%, 20=2.41%, 50=1.49%, 100=22.93%, 250=72.69% 00:25:22.309 lat (msec) : 500=0.02% 00:25:22.309 cpu : usr=0.33%, sys=1.85%, ctx=1169, majf=0, minf=4097 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=5387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job7: (groupid=0, 
jobs=1): err= 0: pid=4036984: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=607, BW=152MiB/s (159MB/s)(1535MiB/10107msec) 00:25:22.309 slat (usec): min=8, max=116570, avg=1223.78, stdev=4878.82 00:25:22.309 clat (usec): min=983, max=267660, avg=104066.91, stdev=43366.82 00:25:22.309 lat (usec): min=1002, max=267835, avg=105290.68, stdev=44043.74 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 31], 20.00th=[ 72], 00:25:22.309 | 30.00th=[ 85], 40.00th=[ 99], 50.00th=[ 111], 60.00th=[ 121], 00:25:22.309 | 70.00th=[ 130], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 167], 00:25:22.309 | 99.00th=[ 188], 99.50th=[ 209], 99.90th=[ 243], 99.95th=[ 243], 00:25:22.309 | 99.99th=[ 268] 00:25:22.309 bw ( KiB/s): min=84992, max=357888, per=8.18%, avg=155494.40, stdev=62024.47, samples=20 00:25:22.309 iops : min= 332, max= 1398, avg=607.40, stdev=242.28, samples=20 00:25:22.309 lat (usec) : 1000=0.02% 00:25:22.309 lat (msec) : 2=0.10%, 4=0.51%, 10=0.80%, 20=2.69%, 50=9.30% 00:25:22.309 lat (msec) : 100=28.20%, 250=58.34%, 500=0.05% 00:25:22.309 cpu : usr=0.33%, sys=1.99%, ctx=1333, majf=0, minf=4097 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=6138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job8: (groupid=0, jobs=1): err= 0: pid=4036987: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=717, BW=179MiB/s (188MB/s)(1813MiB/10109msec) 00:25:22.309 slat (usec): min=13, max=111355, avg=1277.79, stdev=4249.62 00:25:22.309 clat (msec): min=4, max=262, avg=87.87, stdev=34.80 00:25:22.309 lat (msec): min=4, max=262, avg=89.15, stdev=35.26 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 16], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 58], 00:25:22.309 | 30.00th=[ 66], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 91], 00:25:22.309 | 70.00th=[ 105], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 153], 00:25:22.309 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 245], 99.95th=[ 259], 00:25:22.309 | 99.99th=[ 264] 00:25:22.309 bw ( KiB/s): min=103118, max=295936, per=9.69%, avg=184048.70, stdev=57463.17, samples=20 00:25:22.309 iops : min= 402, max= 1156, avg=718.90, stdev=224.53, samples=20 00:25:22.309 lat (msec) : 10=0.69%, 20=0.58%, 50=7.60%, 100=57.69%, 250=33.37% 00:25:22.309 lat (msec) : 500=0.07% 00:25:22.309 cpu : usr=0.41%, sys=2.39%, ctx=1294, majf=0, minf=4097 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=7252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job9: (groupid=0, jobs=1): err= 0: pid=4036988: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=616, BW=154MiB/s (162MB/s)(1557MiB/10106msec) 00:25:22.309 slat (usec): min=13, max=55751, avg=1542.97, stdev=4356.01 00:25:22.309 clat (msec): min=5, max=246, avg=102.25, stdev=35.94 00:25:22.309 lat (msec): min=5, max=246, avg=103.79, stdev=36.57 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 59], 20.00th=[ 71], 00:25:22.309 | 30.00th=[ 82], 
40.00th=[ 94], 50.00th=[ 107], 60.00th=[ 116], 00:25:22.309 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 146], 95.00th=[ 157], 00:25:22.309 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 232], 99.95th=[ 232], 00:25:22.309 | 99.99th=[ 247] 00:25:22.309 bw ( KiB/s): min=102400, max=348672, per=8.30%, avg=157772.80, stdev=56537.43, samples=20 00:25:22.309 iops : min= 400, max= 1362, avg=616.30, stdev=220.85, samples=20 00:25:22.309 lat (msec) : 10=0.66%, 20=1.38%, 50=5.96%, 100=36.12%, 250=55.89% 00:25:22.309 cpu : usr=0.42%, sys=2.09%, ctx=1177, majf=0, minf=3721 00:25:22.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:22.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.309 issued rwts: total=6227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.309 job10: (groupid=0, jobs=1): err= 0: pid=4036989: Thu Jul 25 19:55:29 2024 00:25:22.309 read: IOPS=777, BW=194MiB/s (204MB/s)(1971MiB/10145msec) 00:25:22.309 slat (usec): min=9, max=81728, avg=958.14, stdev=3582.83 00:25:22.309 clat (msec): min=10, max=307, avg=81.32, stdev=41.04 00:25:22.309 lat (msec): min=11, max=307, avg=82.28, stdev=41.45 00:25:22.309 clat percentiles (msec): 00:25:22.309 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 42], 00:25:22.309 | 30.00th=[ 54], 40.00th=[ 65], 50.00th=[ 81], 60.00th=[ 91], 00:25:22.309 | 70.00th=[ 103], 80.00th=[ 115], 90.00th=[ 136], 95.00th=[ 153], 00:25:22.310 | 99.00th=[ 176], 99.50th=[ 222], 99.90th=[ 288], 99.95th=[ 309], 00:25:22.310 | 99.99th=[ 309] 00:25:22.310 bw ( KiB/s): min=101376, max=420352, per=10.54%, avg=200243.20, stdev=87399.39, samples=20 00:25:22.310 iops : min= 396, max= 1642, avg=782.20, stdev=341.40, samples=20 00:25:22.310 lat (msec) : 20=0.96%, 50=26.29%, 100=40.51%, 250=31.91%, 500=0.33% 00:25:22.310 cpu : usr=0.35%, sys=2.31%, ctx=1433, majf=0, minf=4097 00:25:22.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:22.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.310 issued rwts: total=7885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.310 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.310 00:25:22.310 Run status group 0 (all jobs): 00:25:22.310 READ: bw=1855MiB/s (1945MB/s), 133MiB/s-261MiB/s (140MB/s-274MB/s), io=18.4GiB (19.7GB), run=10011-10145msec 00:25:22.310 00:25:22.310 Disk stats (read/write): 00:25:22.310 nvme0n1: ios=14047/0, merge=0/0, ticks=1237113/0, in_queue=1237113, util=96.97% 00:25:22.310 nvme10n1: ios=13573/0, merge=0/0, ticks=1233345/0, in_queue=1233345, util=97.19% 00:25:22.310 nvme1n1: ios=20430/0, merge=0/0, ticks=1242251/0, in_queue=1242251, util=97.50% 00:25:22.310 nvme2n1: ios=10992/0, merge=0/0, ticks=1234948/0, in_queue=1234948, util=97.66% 00:25:22.310 nvme3n1: ios=10978/0, merge=0/0, ticks=1234146/0, in_queue=1234146, util=97.76% 00:25:22.310 nvme4n1: ios=13235/0, merge=0/0, ticks=1230602/0, in_queue=1230602, util=98.13% 00:25:22.310 nvme5n1: ios=10569/0, merge=0/0, ticks=1232173/0, in_queue=1232173, util=98.29% 00:25:22.310 nvme6n1: ios=12061/0, merge=0/0, ticks=1230928/0, in_queue=1230928, util=98.42% 00:25:22.310 nvme7n1: ios=14298/0, merge=0/0, ticks=1230967/0, in_queue=1230967, util=98.89% 00:25:22.310 nvme8n1: ios=12249/0, merge=0/0, 
ticks=1229068/0, in_queue=1229068, util=99.09% 00:25:22.310 nvme9n1: ios=15589/0, merge=0/0, ticks=1234900/0, in_queue=1234900, util=99.23% 00:25:22.310 19:55:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:22.310 [global] 00:25:22.310 thread=1 00:25:22.310 invalidate=1 00:25:22.310 rw=randwrite 00:25:22.310 time_based=1 00:25:22.310 runtime=10 00:25:22.310 ioengine=libaio 00:25:22.310 direct=1 00:25:22.310 bs=262144 00:25:22.310 iodepth=64 00:25:22.310 norandommap=1 00:25:22.310 numjobs=1 00:25:22.310 00:25:22.310 [job0] 00:25:22.310 filename=/dev/nvme0n1 00:25:22.310 [job1] 00:25:22.310 filename=/dev/nvme10n1 00:25:22.310 [job2] 00:25:22.310 filename=/dev/nvme1n1 00:25:22.310 [job3] 00:25:22.310 filename=/dev/nvme2n1 00:25:22.310 [job4] 00:25:22.310 filename=/dev/nvme3n1 00:25:22.310 [job5] 00:25:22.310 filename=/dev/nvme4n1 00:25:22.310 [job6] 00:25:22.310 filename=/dev/nvme5n1 00:25:22.310 [job7] 00:25:22.310 filename=/dev/nvme6n1 00:25:22.310 [job8] 00:25:22.310 filename=/dev/nvme7n1 00:25:22.310 [job9] 00:25:22.310 filename=/dev/nvme8n1 00:25:22.310 [job10] 00:25:22.310 filename=/dev/nvme9n1 00:25:22.310 Could not set queue depth (nvme0n1) 00:25:22.310 Could not set queue depth (nvme10n1) 00:25:22.310 Could not set queue depth (nvme1n1) 00:25:22.310 Could not set queue depth (nvme2n1) 00:25:22.310 Could not set queue depth (nvme3n1) 00:25:22.310 Could not set queue depth (nvme4n1) 00:25:22.310 Could not set queue depth (nvme5n1) 00:25:22.310 Could not set queue depth (nvme6n1) 00:25:22.310 Could not set queue depth (nvme7n1) 00:25:22.310 Could not set queue depth (nvme8n1) 00:25:22.310 Could not set queue depth (nvme9n1) 00:25:22.310 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.310 fio-3.35 00:25:22.310 Starting 11 threads 00:25:32.328 00:25:32.328 job0: (groupid=0, jobs=1): err= 0: pid=4038012: Thu Jul 25 19:55:40 2024 00:25:32.328 write: IOPS=493, BW=123MiB/s (129MB/s)(1250MiB/10131msec); 0 zone resets 00:25:32.328 slat (usec): min=15, max=70893, avg=1185.36, stdev=3558.05 00:25:32.328 clat (usec): min=848, 
max=296597, avg=128351.20, stdev=62167.17 00:25:32.329 lat (usec): min=888, max=296632, avg=129536.56, stdev=62973.47 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 43], 20.00th=[ 77], 00:25:32.329 | 30.00th=[ 104], 40.00th=[ 113], 50.00th=[ 122], 60.00th=[ 136], 00:25:32.329 | 70.00th=[ 163], 80.00th=[ 186], 90.00th=[ 211], 95.00th=[ 236], 00:25:32.329 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 288], 00:25:32.329 | 99.99th=[ 296] 00:25:32.329 bw ( KiB/s): min=63488, max=196608, per=9.21%, avg=126416.05, stdev=31640.23, samples=20 00:25:32.329 iops : min= 248, max= 768, avg=493.75, stdev=123.57, samples=20 00:25:32.329 lat (usec) : 1000=0.08% 00:25:32.329 lat (msec) : 2=0.26%, 4=0.52%, 10=1.80%, 20=2.10%, 50=7.36% 00:25:32.329 lat (msec) : 100=16.46%, 250=67.89%, 500=3.54% 00:25:32.329 cpu : usr=1.42%, sys=1.65%, ctx=3093, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: total=0,5001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job1: (groupid=0, jobs=1): err= 0: pid=4038013: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=406, BW=102MiB/s (107MB/s)(1031MiB/10141msec); 0 zone resets 00:25:32.329 slat (usec): min=22, max=34645, avg=2135.07, stdev=4386.21 00:25:32.329 clat (usec): min=1333, max=325156, avg=155172.99, stdev=56015.06 00:25:32.329 lat (usec): min=1921, max=325241, avg=157308.07, stdev=56636.06 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 10], 5.00th=[ 62], 10.00th=[ 85], 20.00th=[ 113], 00:25:32.329 | 30.00th=[ 129], 40.00th=[ 146], 50.00th=[ 155], 60.00th=[ 167], 00:25:32.329 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 226], 95.00th=[ 251], 00:25:32.329 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 326], 00:25:32.329 | 99.99th=[ 326] 00:25:32.329 bw ( KiB/s): min=63361, max=159744, per=7.58%, avg=103964.45, stdev=25808.74, samples=20 00:25:32.329 iops : min= 247, max= 624, avg=406.05, stdev=100.87, samples=20 00:25:32.329 lat (msec) : 2=0.05%, 4=0.05%, 10=0.92%, 20=0.56%, 50=2.45% 00:25:32.329 lat (msec) : 100=8.05%, 250=82.90%, 500=5.02% 00:25:32.329 cpu : usr=1.17%, sys=1.30%, ctx=1549, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: total=0,4124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job2: (groupid=0, jobs=1): err= 0: pid=4038014: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=502, BW=126MiB/s (132MB/s)(1273MiB/10135msec); 0 zone resets 00:25:32.329 slat (usec): min=16, max=132083, avg=999.70, stdev=4494.31 00:25:32.329 clat (usec): min=800, max=358241, avg=126358.18, stdev=76417.28 00:25:32.329 lat (usec): min=831, max=358325, avg=127357.89, stdev=77100.17 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 17], 20.00th=[ 49], 00:25:32.329 | 30.00th=[ 77], 40.00th=[ 106], 50.00th=[ 133], 60.00th=[ 150], 00:25:32.329 | 70.00th=[ 165], 80.00th=[ 194], 90.00th=[ 230], 95.00th=[ 257], 00:25:32.329 | 
99.00th=[ 284], 99.50th=[ 309], 99.90th=[ 338], 99.95th=[ 347], 00:25:32.329 | 99.99th=[ 359] 00:25:32.329 bw ( KiB/s): min=57344, max=235520, per=9.38%, avg=128669.85, stdev=38921.00, samples=20 00:25:32.329 iops : min= 224, max= 920, avg=502.60, stdev=152.05, samples=20 00:25:32.329 lat (usec) : 1000=0.24% 00:25:32.329 lat (msec) : 2=0.73%, 4=1.53%, 10=3.61%, 20=5.25%, 50=9.16% 00:25:32.329 lat (msec) : 100=18.21%, 250=55.56%, 500=5.72% 00:25:32.329 cpu : usr=1.50%, sys=1.77%, ctx=3568, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: total=0,5090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job3: (groupid=0, jobs=1): err= 0: pid=4038026: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=471, BW=118MiB/s (124MB/s)(1191MiB/10109msec); 0 zone resets 00:25:32.329 slat (usec): min=20, max=72890, avg=1898.16, stdev=4329.86 00:25:32.329 clat (msec): min=2, max=292, avg=133.83, stdev=65.32 00:25:32.329 lat (msec): min=2, max=292, avg=135.72, stdev=66.26 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 45], 20.00th=[ 58], 00:25:32.329 | 30.00th=[ 95], 40.00th=[ 116], 50.00th=[ 140], 60.00th=[ 159], 00:25:32.329 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 234], 00:25:32.329 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 292], 00:25:32.329 | 99.99th=[ 292] 00:25:32.329 bw ( KiB/s): min=63488, max=306176, per=8.77%, avg=120362.00, stdev=59366.62, samples=20 00:25:32.329 iops : min= 248, max= 1196, avg=470.15, stdev=231.91, samples=20 00:25:32.329 lat (msec) : 4=0.10%, 10=0.46%, 20=2.10%, 50=12.21%, 100=20.02% 00:25:32.329 lat (msec) : 250=62.90%, 500=2.20% 00:25:32.329 cpu : usr=1.41%, sys=1.52%, ctx=1863, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: total=0,4765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job4: (groupid=0, jobs=1): err= 0: pid=4038027: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=537, BW=134MiB/s (141MB/s)(1359MiB/10104msec); 0 zone resets 00:25:32.329 slat (usec): min=16, max=45443, avg=1216.46, stdev=3603.72 00:25:32.329 clat (usec): min=843, max=279331, avg=117696.60, stdev=69681.24 00:25:32.329 lat (usec): min=885, max=279396, avg=118913.06, stdev=70467.01 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 56], 00:25:32.329 | 30.00th=[ 73], 40.00th=[ 89], 50.00th=[ 105], 60.00th=[ 126], 00:25:32.329 | 70.00th=[ 148], 80.00th=[ 190], 90.00th=[ 222], 95.00th=[ 249], 00:25:32.329 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:25:32.329 | 99.99th=[ 279] 00:25:32.329 bw ( KiB/s): min=67584, max=236032, per=10.02%, avg=137494.35, stdev=41248.50, samples=20 00:25:32.329 iops : min= 264, max= 922, avg=537.05, stdev=161.10, samples=20 00:25:32.329 lat (usec) : 1000=0.06% 00:25:32.329 lat (msec) : 2=0.37%, 4=0.88%, 10=2.26%, 20=2.61%, 50=11.35% 00:25:32.329 lat (msec) : 100=30.60%, 250=47.25%, 
500=4.62% 00:25:32.329 cpu : usr=1.65%, sys=1.87%, ctx=3217, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: total=0,5435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job5: (groupid=0, jobs=1): err= 0: pid=4038028: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=507, BW=127MiB/s (133MB/s)(1283MiB/10109msec); 0 zone resets 00:25:32.329 slat (usec): min=18, max=62289, avg=1554.10, stdev=4000.71 00:25:32.329 clat (usec): min=1149, max=296961, avg=124431.62, stdev=65819.55 00:25:32.329 lat (usec): min=1192, max=297424, avg=125985.72, stdev=66744.37 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 59], 00:25:32.329 | 30.00th=[ 90], 40.00th=[ 104], 50.00th=[ 126], 60.00th=[ 142], 00:25:32.329 | 70.00th=[ 161], 80.00th=[ 186], 90.00th=[ 213], 95.00th=[ 234], 00:25:32.329 | 99.00th=[ 268], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:25:32.329 | 99.99th=[ 296] 00:25:32.329 bw ( KiB/s): min=71680, max=276992, per=9.46%, avg=129795.00, stdev=50232.57, samples=20 00:25:32.329 iops : min= 280, max= 1082, avg=506.95, stdev=196.25, samples=20 00:25:32.329 lat (msec) : 2=0.10%, 4=0.47%, 10=1.71%, 20=1.89%, 50=13.60% 00:25:32.329 lat (msec) : 100=20.01%, 250=59.46%, 500=2.77% 00:25:32.329 cpu : usr=1.53%, sys=1.76%, ctx=2467, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: total=0,5133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job6: (groupid=0, jobs=1): err= 0: pid=4038029: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=522, BW=131MiB/s (137MB/s)(1324MiB/10143msec); 0 zone resets 00:25:32.329 slat (usec): min=17, max=52143, avg=1085.33, stdev=3461.46 00:25:32.329 clat (usec): min=872, max=356381, avg=121357.32, stdev=70698.61 00:25:32.329 lat (usec): min=896, max=356447, avg=122442.65, stdev=71504.99 00:25:32.329 clat percentiles (usec): 00:25:32.329 | 1.00th=[ 1926], 5.00th=[ 9896], 10.00th=[ 21890], 20.00th=[ 47449], 00:25:32.329 | 30.00th=[ 78119], 40.00th=[106431], 50.00th=[121111], 60.00th=[141558], 00:25:32.329 | 70.00th=[162530], 80.00th=[187696], 90.00th=[214959], 95.00th=[233833], 00:25:32.329 | 99.00th=[274727], 99.50th=[278922], 99.90th=[341836], 99.95th=[341836], 00:25:32.329 | 99.99th=[354419] 00:25:32.329 bw ( KiB/s): min=66560, max=242688, per=9.76%, avg=133983.85, stdev=47667.30, samples=20 00:25:32.329 iops : min= 260, max= 948, avg=523.30, stdev=186.19, samples=20 00:25:32.329 lat (usec) : 1000=0.26% 00:25:32.329 lat (msec) : 2=0.77%, 4=1.08%, 10=2.89%, 20=4.08%, 50=12.23% 00:25:32.329 lat (msec) : 100=15.63%, 250=60.11%, 500=2.95% 00:25:32.329 cpu : usr=1.67%, sys=1.78%, ctx=3582, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.329 issued rwts: 
total=0,5297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.329 job7: (groupid=0, jobs=1): err= 0: pid=4038030: Thu Jul 25 19:55:40 2024 00:25:32.329 write: IOPS=475, BW=119MiB/s (125MB/s)(1202MiB/10109msec); 0 zone resets 00:25:32.329 slat (usec): min=21, max=46520, avg=1574.74, stdev=4020.07 00:25:32.329 clat (usec): min=1530, max=283503, avg=132867.35, stdev=68824.94 00:25:32.329 lat (usec): min=1573, max=283618, avg=134442.10, stdev=69835.67 00:25:32.329 clat percentiles (msec): 00:25:32.329 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 36], 20.00th=[ 79], 00:25:32.329 | 30.00th=[ 89], 40.00th=[ 109], 50.00th=[ 126], 60.00th=[ 148], 00:25:32.329 | 70.00th=[ 174], 80.00th=[ 199], 90.00th=[ 232], 95.00th=[ 251], 00:25:32.329 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 284], 00:25:32.329 | 99.99th=[ 284] 00:25:32.329 bw ( KiB/s): min=65405, max=211456, per=8.85%, avg=121479.70, stdev=47046.17, samples=20 00:25:32.329 iops : min= 255, max= 826, avg=474.50, stdev=183.80, samples=20 00:25:32.329 lat (msec) : 2=0.04%, 4=0.37%, 10=2.02%, 20=3.10%, 50=6.03% 00:25:32.329 lat (msec) : 100=24.15%, 250=58.99%, 500=5.30% 00:25:32.329 cpu : usr=1.51%, sys=1.68%, ctx=2508, majf=0, minf=1 00:25:32.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:32.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.330 issued rwts: total=0,4808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.330 job8: (groupid=0, jobs=1): err= 0: pid=4038031: Thu Jul 25 19:55:40 2024 00:25:32.330 write: IOPS=501, BW=125MiB/s (131MB/s)(1271MiB/10137msec); 0 zone resets 00:25:32.330 slat (usec): min=19, max=166220, avg=1541.03, stdev=5124.11 00:25:32.330 clat (usec): min=1391, max=385935, avg=125948.59, stdev=71737.57 00:25:32.330 lat (usec): min=1499, max=386035, avg=127489.63, stdev=72591.11 00:25:32.330 clat percentiles (msec): 00:25:32.330 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 74], 00:25:32.330 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 105], 60.00th=[ 128], 00:25:32.330 | 70.00th=[ 155], 80.00th=[ 192], 90.00th=[ 239], 95.00th=[ 271], 00:25:32.330 | 99.00th=[ 296], 99.50th=[ 313], 99.90th=[ 376], 99.95th=[ 376], 00:25:32.330 | 99.99th=[ 388] 00:25:32.330 bw ( KiB/s): min=63488, max=234496, per=9.37%, avg=128538.20, stdev=50470.84, samples=20 00:25:32.330 iops : min= 248, max= 916, avg=502.10, stdev=197.15, samples=20 00:25:32.330 lat (msec) : 2=0.10%, 4=0.35%, 10=0.22%, 20=1.36%, 50=9.50% 00:25:32.330 lat (msec) : 100=35.76%, 250=44.83%, 500=7.89% 00:25:32.330 cpu : usr=1.67%, sys=1.77%, ctx=2298, majf=0, minf=1 00:25:32.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.330 issued rwts: total=0,5084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.330 job9: (groupid=0, jobs=1): err= 0: pid=4038032: Thu Jul 25 19:55:40 2024 00:25:32.330 write: IOPS=448, BW=112MiB/s (118MB/s)(1134MiB/10121msec); 0 zone resets 00:25:32.330 slat (usec): min=18, max=118706, avg=1337.97, stdev=4700.35 00:25:32.330 clat (usec): min=1323, max=335653, avg=141303.80, stdev=80245.75 
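Both fio passes in this log (the rw=read pass started at 00:25:10 and the rw=randwrite pass started at 00:25:22) come from the same fio-wrapper invocation, and the job file it generates is the one echoed above: a shared [global] section plus one [jobN] stanza per connected namespace. A hand-rolled equivalent of the randwrite pass, a sketch only, using the device names this particular run enumerated (they will differ on other hosts):

cat > multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
# append one job stanza per namespace, in the order this run enumerated them
n=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
           /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1; do
    printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> multiconnection.fio
    n=$((n + 1))
done
fio multiconnection.fio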
00:25:32.330 lat (usec): min=1980, max=388850, avg=142641.77, stdev=81355.99 00:25:32.330 clat percentiles (msec): 00:25:32.330 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 62], 00:25:32.330 | 30.00th=[ 87], 40.00th=[ 120], 50.00th=[ 140], 60.00th=[ 153], 00:25:32.330 | 70.00th=[ 182], 80.00th=[ 222], 90.00th=[ 262], 95.00th=[ 284], 00:25:32.330 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 338], 00:25:32.330 | 99.99th=[ 338] 00:25:32.330 bw ( KiB/s): min=57344, max=202240, per=8.35%, avg=114525.70, stdev=38955.60, samples=20 00:25:32.330 iops : min= 224, max= 790, avg=447.30, stdev=152.13, samples=20 00:25:32.330 lat (msec) : 2=0.04%, 4=0.24%, 10=1.90%, 20=3.15%, 50=9.59% 00:25:32.330 lat (msec) : 100=19.42%, 250=52.85%, 500=12.81% 00:25:32.330 cpu : usr=1.40%, sys=1.50%, ctx=3017, majf=0, minf=1 00:25:32.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:32.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.330 issued rwts: total=0,4537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.330 job10: (groupid=0, jobs=1): err= 0: pid=4038033: Thu Jul 25 19:55:40 2024 00:25:32.330 write: IOPS=502, BW=126MiB/s (132MB/s)(1275MiB/10141msec); 0 zone resets 00:25:32.330 slat (usec): min=20, max=103192, avg=1090.93, stdev=3885.30 00:25:32.330 clat (usec): min=1046, max=339689, avg=125929.14, stdev=68286.34 00:25:32.330 lat (usec): min=1085, max=339744, avg=127020.06, stdev=68869.96 00:25:32.330 clat percentiles (msec): 00:25:32.330 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 65], 00:25:32.330 | 30.00th=[ 89], 40.00th=[ 105], 50.00th=[ 127], 60.00th=[ 146], 00:25:32.330 | 70.00th=[ 161], 80.00th=[ 186], 90.00th=[ 220], 95.00th=[ 249], 00:25:32.330 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330], 00:25:32.330 | 99.99th=[ 338] 00:25:32.330 bw ( KiB/s): min=63488, max=214016, per=9.39%, avg=128889.60, stdev=38365.14, samples=20 00:25:32.330 iops : min= 248, max= 836, avg=503.45, stdev=149.82, samples=20 00:25:32.330 lat (msec) : 2=0.41%, 4=0.53%, 10=2.10%, 20=3.92%, 50=9.14% 00:25:32.330 lat (msec) : 100=20.87%, 250=58.47%, 500=4.55% 00:25:32.330 cpu : usr=1.43%, sys=1.91%, ctx=3294, majf=0, minf=1 00:25:32.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.330 issued rwts: total=0,5098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.330 00:25:32.330 Run status group 0 (all jobs): 00:25:32.330 WRITE: bw=1340MiB/s (1405MB/s), 102MiB/s-134MiB/s (107MB/s-141MB/s), io=13.3GiB (14.3GB), run=10104-10143msec 00:25:32.330 00:25:32.330 Disk stats (read/write): 00:25:32.330 nvme0n1: ios=48/9816, merge=0/0, ticks=1595/1218169, in_queue=1219764, util=99.51% 00:25:32.330 nvme10n1: ios=49/8042, merge=0/0, ticks=39/1201076, in_queue=1201115, util=97.53% 00:25:32.330 nvme1n1: ios=49/9983, merge=0/0, ticks=2591/1210125, in_queue=1212716, util=99.97% 00:25:32.330 nvme2n1: ios=27/9189, merge=0/0, ticks=431/1215258, in_queue=1215689, util=98.60% 00:25:32.330 nvme3n1: ios=43/10674, merge=0/0, ticks=1000/1219885, in_queue=1220885, util=100.00% 00:25:32.330 nvme4n1: ios=39/10068, merge=0/0, 
ticks=818/1212515, in_queue=1213333, util=100.00% 00:25:32.330 nvme5n1: ios=40/10418, merge=0/0, ticks=778/1220268, in_queue=1221046, util=100.00% 00:25:32.330 nvme6n1: ios=43/9419, merge=0/0, ticks=961/1217552, in_queue=1218513, util=100.00% 00:25:32.330 nvme7n1: ios=45/9988, merge=0/0, ticks=2314/1188134, in_queue=1190448, util=100.00% 00:25:32.330 nvme8n1: ios=34/8841, merge=0/0, ticks=1275/1217619, in_queue=1218894, util=100.00% 00:25:32.330 nvme9n1: ios=43/10041, merge=0/0, ticks=3107/1210513, in_queue=1213620, util=100.00% 00:25:32.330 19:55:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:32.330 19:55:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:32.330 19:55:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.330 19:55:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:32.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:32.330 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:32.330 19:55:41 
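The cnode1 teardown traced above is the same three-step pattern that repeats below for cnode2 through cnode11: disconnect the initiator-side controller, wait until its serial number disappears from lsblk, then delete the subsystem over the target's RPC socket. A condensed sketch of that loop, paraphrased from the xtrace output rather than copied from multiconnection.sh (NVMF_SUBSYS=11 and the cnodeN/SPDKN naming are taken from the log):

  # Hedged sketch of the per-subsystem teardown seen in the trace.
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # waitforserial_disconnect: poll until no block device reports serial SPDK${i}
      # (the real helper also caps the number of iterations before giving up)
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the running nvmf_tgt
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done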
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.330 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:32.589 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.589 19:55:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:32.849 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.849 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.850 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.850 19:55:42 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:33.109 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.109 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:33.367 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.367 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:33.625 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:33.625 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:33.625 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.625 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk 
-o NAME,SERIAL 00:25:33.625 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:33.626 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.626 19:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.626 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.626 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.626 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:33.884 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.884 19:55:43 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:33.884 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.884 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:34.146 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.146 19:55:43 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.146 rmmod nvme_tcp 00:25:34.146 rmmod nvme_fabrics 00:25:34.146 rmmod nvme_keyring 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4032706 ']' 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4032706 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 4032706 ']' 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 4032706 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4032706 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4032706' 00:25:34.146 killing process with pid 4032706 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 4032706 00:25:34.146 19:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 4032706 00:25:34.716 19:55:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:34.716 19:55:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:34.717 19:55:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:34.717 19:55:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.717 19:55:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:34.717 19:55:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.717 19:55:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.717 19:55:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.259 19:55:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr 
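nvmftestfini above unwinds the test in roughly the reverse order of setup. A hedged sketch of the steps visible in the trace (helper bodies live in test/nvmf/common.sh and autotest_common.sh; _remove_spdk_ns runs with its output suppressed, so its namespace deletion is inferred rather than traced):

  sync                                # flush outstanding I/O before unloading modules
  modprobe -v -r nvme-tcp             # drops nvme_tcp plus nvme_fabrics/nvme_keyring dependencies
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"  # killprocess: stop the nvmf_tgt reactor (pid 4032706 in this log)
  ip netns delete cvl_0_0_ns_spdk     # _remove_spdk_ns (assumed): drop the target-side namespace
  ip -4 addr flush cvl_0_1            # clear the initiator-side test address, as the next trace line shows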
flush cvl_0_1 00:25:37.259 00:25:37.259 real 1m0.923s 00:25:37.259 user 3m24.476s 00:25:37.259 sys 0m24.616s 00:25:37.259 19:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:37.259 19:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.259 ************************************ 00:25:37.259 END TEST nvmf_multiconnection 00:25:37.259 ************************************ 00:25:37.259 19:55:46 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:37.259 19:55:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:37.259 19:55:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:37.259 19:55:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.259 ************************************ 00:25:37.259 START TEST nvmf_initiator_timeout 00:25:37.259 ************************************ 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:37.259 * Looking for test storage... 00:25:37.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.259 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.260 19:55:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.163 19:55:48 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:39.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:39.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.163 
19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:39.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:39.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
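The device scan above is how test/nvmf/common.sh picks the NICs for a phy run: both ports report PCI ID 8086:159b (Intel E810, ice driver), and each PCI function is then resolved to its kernel net device through sysfs before the link-state check. A minimal sketch of that resolution step, assuming the same 0000:0a:00.0/0000:0a:00.1 functions as in the log:

  # Hedged sketch; mirrors the pci_net_devs expansion and "Found net devices" echo in the trace.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
      net_dev=${pci_net_devs[0]##*/}                        # strip the sysfs path, keep the interface name
      # the real helper also checks the interface operstate is "up" before accepting it
      echo "Found net devices under $pci: $net_dev"
      net_devs+=( "$net_dev" )
  done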
NVMF_SECOND_TARGET_IP= 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:25:39.163 00:25:39.163 --- 10.0.0.2 ping statistics --- 00:25:39.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.163 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:39.163 00:25:39.163 --- 10.0.0.1 ping statistics --- 00:25:39.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.163 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.163 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4041376 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4041376 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 4041376 ']' 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:39.164 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.164 [2024-07-25 19:55:48.441686] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:25:39.164 [2024-07-25 19:55:48.441770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.164 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.164 [2024-07-25 19:55:48.509835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.422 [2024-07-25 19:55:48.599418] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
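Because this is a phy run, nvmf_tcp_init builds the test topology out of the two physical ports: the first port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-checked before the target starts. The commands below are the ones in the trace, grouped for readability:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target (0.194 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator (0.113 ms above)
  # nvmf_tgt is then launched inside the namespace, exactly as the trace shows:
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF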
00:25:39.422 [2024-07-25 19:55:48.599467] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.422 [2024-07-25 19:55:48.599493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.422 [2024-07-25 19:55:48.599504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.422 [2024-07-25 19:55:48.599515] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.422 [2024-07-25 19:55:48.599615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.422 [2024-07-25 19:55:48.599695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.422 [2024-07-25 19:55:48.599638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.422 [2024-07-25 19:55:48.599698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 Malloc0 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 Delay0 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 [2024-07-25 19:55:48.774199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.422 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.423 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.423 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.423 [2024-07-25 19:55:48.802461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.423 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.423 19:55:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:39.992 19:55:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:39.992 19:55:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:39.992 19:55:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.992 19:55:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:39.992 19:55:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:42.525 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4041798 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:42.526 19:55:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:42.526 [global] 00:25:42.526 thread=1 00:25:42.526 invalidate=1 00:25:42.526 rw=write 00:25:42.526 time_based=1 00:25:42.526 runtime=60 00:25:42.526 
ioengine=libaio 00:25:42.526 direct=1 00:25:42.526 bs=4096 00:25:42.526 iodepth=1 00:25:42.526 norandommap=0 00:25:42.526 numjobs=1 00:25:42.526 00:25:42.526 verify_dump=1 00:25:42.526 verify_backlog=512 00:25:42.526 verify_state_save=0 00:25:42.526 do_verify=1 00:25:42.526 verify=crc32c-intel 00:25:42.526 [job0] 00:25:42.526 filename=/dev/nvme0n1 00:25:42.526 Could not set queue depth (nvme0n1) 00:25:42.526 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.526 fio-3.35 00:25:42.526 Starting 1 thread 00:25:45.057 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:45.057 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.057 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.058 true 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.058 true 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.058 true 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.058 true 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.058 19:55:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.348 true 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.348 true 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.348 
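This is the core of the initiator-timeout exercise: while the 60-second queue-depth-1 verify job runs against /dev/nvme0n1, initiator_timeout.sh raises the injected latencies on the Delay0 bdev (created above with -r/-t/-w/-n 30, i.e. 30 µs) to tens of seconds, sleeps, and then restores them; the expectation, confirmed by the "fio successful as expected" message further down, is that the initiator rides out the stall rather than failing I/O. A hedged sketch of the RPC sequence (rpc_cmd is the autotest wrapper around scripts/rpc.py; the two remaining restore calls appear in the trace just below, and the values are copied from the log, including the 310000000 passed for p99_write, an order of magnitude above the others):

  # Delay bdev latencies are in microseconds.
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000   # ~31 s
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000  # value as it appears in the log
  sleep 3                                                        # let I/O sit behind the injected delay
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  30          # restore the fast path
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 30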
19:55:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.348 true 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.348 true 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:48.348 19:55:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4041798 00:26:44.622 00:26:44.622 job0: (groupid=0, jobs=1): err= 0: pid=4041869: Thu Jul 25 19:56:51 2024 00:26:44.622 read: IOPS=153, BW=615KiB/s (630kB/s)(36.1MiB/60025msec) 00:26:44.622 slat (nsec): min=4599, max=75296, avg=12767.04, stdev=8114.04 00:26:44.622 clat (usec): min=235, max=40978k, avg=6218.35, stdev=426554.20 00:26:44.622 lat (usec): min=241, max=40978k, avg=6231.12, stdev=426554.24 00:26:44.622 clat percentiles (usec): 00:26:44.622 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 00:26:44.622 | 20.00th=[ 277], 30.00th=[ 285], 40.00th=[ 293], 00:26:44.622 | 50.00th=[ 302], 60.00th=[ 314], 70.00th=[ 318], 00:26:44.622 | 80.00th=[ 330], 90.00th=[ 375], 95.00th=[ 424], 00:26:44.622 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:44.622 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:44.622 write: IOPS=162, BW=648KiB/s (664kB/s)(38.0MiB/60025msec); 0 zone resets 00:26:44.622 slat (nsec): min=6100, max=83484, avg=15170.80, stdev=9529.16 00:26:44.622 clat (usec): min=175, max=512, avg=234.45, stdev=47.39 00:26:44.622 lat (usec): min=182, max=538, avg=249.62, stdev=54.55 00:26:44.622 clat percentiles (usec): 00:26:44.622 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:26:44.622 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 227], 00:26:44.622 | 70.00th=[ 237], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 347], 00:26:44.622 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 449], 99.95th=[ 486], 00:26:44.622 | 99.99th=[ 515] 00:26:44.622 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6485.33, stdev=1651.56, samples=12 00:26:44.622 iops : min= 1024, max= 2048, avg=1621.33, stdev=412.89, samples=12 00:26:44.622 lat (usec) : 250=40.26%, 500=57.90%, 750=0.06%, 1000=0.02% 00:26:44.622 lat (msec) : 2=0.01%, 50=1.75%, >=2000=0.01% 00:26:44.622 cpu : usr=0.33%, sys=0.51%, ctx=18960, majf=0, minf=2 00:26:44.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.622 issued rwts: total=9231,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.622 00:26:44.622 Run status group 0 (all jobs): 00:26:44.622 READ: bw=615KiB/s (630kB/s), 615KiB/s-615KiB/s (630kB/s-630kB/s), 
io=36.1MiB (37.8MB), run=60025-60025msec 00:26:44.623 WRITE: bw=648KiB/s (664kB/s), 648KiB/s-648KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60025-60025msec 00:26:44.623 00:26:44.623 Disk stats (read/write): 00:26:44.623 nvme0n1: ios=9326/9728, merge=0/0, ticks=17437/2175, in_queue=19612, util=99.85% 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:44.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:44.623 nvmf hotplug test: fio successful as expected 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.623 rmmod nvme_tcp 00:26:44.623 rmmod nvme_fabrics 00:26:44.623 rmmod nvme_keyring 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4041376 ']' 00:26:44.623 19:56:51 
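[annotation] The rpc_cmd lines interleaved with the fio output are thin wrappers around SPDK's scripts/rpc.py. The test raises all four latency knobs of the delay bdev (Delay0) sitting behind the exported namespace while fio is writing, sleeps, and then drops them back to 30 µs so the stalled I/O can drain; the "fio successful as expected" message is the check that the initiator rode out that stall without errors. Issued by hand, the sequence looks roughly like this (values copied from the log; the rpc.py path is assumed, since the harness resolves it through its own rpc_cmd wrapper, and the latency argument is in microseconds, so 31000000 is about 31 s).

rpc=scripts/rpc.py   # path assumed; rpc_cmd points at the same /var/tmp/spdk.sock

# Stall the backing device: push Delay0 latencies far above normal
$rpc bdev_delay_update_latency Delay0 avg_read  31000000
$rpc bdev_delay_update_latency Delay0 avg_write 31000000
$rpc bdev_delay_update_latency Delay0 p99_read  31000000
$rpc bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3

# Recover: bring every latency back down to 30 us so the queued I/O completes
for knob in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$knob" 30
done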
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4041376 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 4041376 ']' 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 4041376 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:44.623 19:56:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4041376 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4041376' 00:26:44.623 killing process with pid 4041376 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 4041376 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 4041376 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.623 19:56:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.882 19:56:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.882 00:26:44.882 real 1m8.113s 00:26:44.882 user 4m10.251s 00:26:44.882 sys 0m6.961s 00:26:44.882 19:56:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:44.882 19:56:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.882 ************************************ 00:26:44.882 END TEST nvmf_initiator_timeout 00:26:44.882 ************************************ 00:26:45.141 19:56:54 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:45.141 19:56:54 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:45.141 19:56:54 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:45.141 19:56:54 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.141 19:56:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.045 
19:56:56 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:47.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:47.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:47.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:47.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:47.045 19:56:56 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:47.045 19:56:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:47.045 19:56:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:47.045 19:56:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.045 ************************************ 00:26:47.045 START TEST nvmf_perf_adq 00:26:47.045 ************************************ 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:47.045 * Looking for test storage... 
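[annotation] The nvmf_perf_adq run that starts here opens the same way every phy test does: gather_supported_nvmf_pci_devs walks the PCI bus for known Intel (E810/X722) and Mellanox device IDs, then resolves each function to its kernel net device through sysfs, which is where the "Found 0000:0a:00.x" and "Found net devices under ...: cvl_0_x" lines come from. A hand-rolled version of that last mapping step (PCI addresses copied from the log; the loop itself is only a sketch):

# Resolve the two E810 functions found above to their net device names via sysfs
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue     # skip functions with no bound net device
        echo "Found net devices under $pci: $(basename "$netdir")"
    done
done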
00:26:47.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.045 19:56:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.046 19:56:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.579 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.579 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:49.579 19:56:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:49.839 19:56:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:51.744 19:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:57.010 19:57:06 
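[annotation] adq_reload_driver above is nothing more than a driver bounce: unload ice, load it again, and give the ports a few seconds to re-register. The reason (my reading of the test, not something the log states) is to clear any channel/traffic-class state left on the E810 from a previous pass before the ADQ comparison starts. By hand:

# Reset E810 driver state between test passes
rmmod ice
modprobe ice
sleep 5    # let cvl_0_0/cvl_0_1 reappear before they are reconfigured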
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:57.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:57.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:57.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:57.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.010 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.011 19:57:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:26:57.011 00:26:57.011 --- 10.0.0.2 ping statistics --- 00:26:57.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.011 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:26:57.011 00:26:57.011 --- 10.0.0.1 ping statistics --- 00:26:57.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.011 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4053485 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4053485 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4053485 ']' 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:57.011 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.011 [2024-07-25 19:57:06.345717] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
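[annotation] The target and the initiator both live on this one host: nvmf_tcp_init moves one E810 port (cvl_0_0) into a private network namespace for the target, leaves the other (cvl_0_1) in the root namespace for the initiator, and ping-checks both directions before nvmf_tgt is started inside the namespace with ip netns exec. Collected into one place, the setup the log just performed is (names and addresses copied from the log):

# Target side: own namespace, target IP on the first port
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: the second port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

# Reachability check in both directions, exactly as the log shows
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1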
00:26:57.011 [2024-07-25 19:57:06.345817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.011 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.011 [2024-07-25 19:57:06.415513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.270 [2024-07-25 19:57:06.504748] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.270 [2024-07-25 19:57:06.504805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.270 [2024-07-25 19:57:06.504834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.270 [2024-07-25 19:57:06.504846] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.270 [2024-07-25 19:57:06.504856] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.270 [2024-07-25 19:57:06.505274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.270 [2024-07-25 19:57:06.509079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.270 [2024-07-25 19:57:06.509112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.270 [2024-07-25 19:57:06.509116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.270 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.529 [2024-07-25 19:57:06.745947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.529 Malloc1 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.529 [2024-07-25 19:57:06.799092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4053628 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:57.529 19:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:57.529 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.430 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:59.430 19:57:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.431 19:57:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.431 19:57:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.431 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:59.431 "tick_rate": 2700000000, 
00:26:59.431 "poll_groups": [ 00:26:59.431 { 00:26:59.431 "name": "nvmf_tgt_poll_group_000", 00:26:59.431 "admin_qpairs": 1, 00:26:59.431 "io_qpairs": 1, 00:26:59.431 "current_admin_qpairs": 1, 00:26:59.431 "current_io_qpairs": 1, 00:26:59.431 "pending_bdev_io": 0, 00:26:59.431 "completed_nvme_io": 18397, 00:26:59.431 "transports": [ 00:26:59.431 { 00:26:59.431 "trtype": "TCP" 00:26:59.431 } 00:26:59.431 ] 00:26:59.431 }, 00:26:59.431 { 00:26:59.431 "name": "nvmf_tgt_poll_group_001", 00:26:59.431 "admin_qpairs": 0, 00:26:59.431 "io_qpairs": 1, 00:26:59.431 "current_admin_qpairs": 0, 00:26:59.431 "current_io_qpairs": 1, 00:26:59.431 "pending_bdev_io": 0, 00:26:59.431 "completed_nvme_io": 20220, 00:26:59.431 "transports": [ 00:26:59.431 { 00:26:59.431 "trtype": "TCP" 00:26:59.431 } 00:26:59.431 ] 00:26:59.431 }, 00:26:59.431 { 00:26:59.431 "name": "nvmf_tgt_poll_group_002", 00:26:59.431 "admin_qpairs": 0, 00:26:59.431 "io_qpairs": 1, 00:26:59.431 "current_admin_qpairs": 0, 00:26:59.431 "current_io_qpairs": 1, 00:26:59.431 "pending_bdev_io": 0, 00:26:59.431 "completed_nvme_io": 20302, 00:26:59.431 "transports": [ 00:26:59.431 { 00:26:59.431 "trtype": "TCP" 00:26:59.431 } 00:26:59.431 ] 00:26:59.431 }, 00:26:59.431 { 00:26:59.431 "name": "nvmf_tgt_poll_group_003", 00:26:59.431 "admin_qpairs": 0, 00:26:59.431 "io_qpairs": 1, 00:26:59.431 "current_admin_qpairs": 0, 00:26:59.431 "current_io_qpairs": 1, 00:26:59.431 "pending_bdev_io": 0, 00:26:59.431 "completed_nvme_io": 20321, 00:26:59.431 "transports": [ 00:26:59.431 { 00:26:59.431 "trtype": "TCP" 00:26:59.431 } 00:26:59.431 ] 00:26:59.431 } 00:26:59.431 ] 00:26:59.431 }' 00:26:59.431 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:59.431 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:59.689 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:59.689 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:59.689 19:57:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4053628 00:27:07.839 Initializing NVMe Controllers 00:27:07.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:07.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:07.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:07.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:07.839 Initialization complete. Launching workers. 
00:27:07.839 ======================================================== 00:27:07.839 Latency(us) 00:27:07.839 Device Information : IOPS MiB/s Average min max 00:27:07.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10713.38 41.85 5975.63 2066.04 10265.98 00:27:07.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10642.18 41.57 6015.72 2498.79 8852.26 00:27:07.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10644.68 41.58 6013.67 2767.20 9061.83 00:27:07.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9677.80 37.80 6615.14 2559.43 10086.40 00:27:07.839 ======================================================== 00:27:07.839 Total : 41678.03 162.80 6144.08 2066.04 10265.98 00:27:07.839 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.839 rmmod nvme_tcp 00:27:07.839 rmmod nvme_fabrics 00:27:07.839 rmmod nvme_keyring 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4053485 ']' 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4053485 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4053485 ']' 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4053485 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:07.839 19:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4053485 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4053485' 00:27:07.839 killing process with pid 4053485 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4053485 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4053485 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- 
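[annotation] Before the numbers above are trusted, the script also cross-checks with nvmf_get_stats (the JSON dumped earlier) that each of the target's four poll groups picked up exactly one I/O qpair, i.e. that the -c 0xF0 initiator cores spread across the -m 0xF reactors instead of piling onto one. That check, reduced to a one-liner equivalent of the jq filter shown in the log (rpc.py path assumed):

# Expect all 4 poll groups to own one active I/O qpair each
count=$(scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
[ "$count" -eq 4 ] || echo "WARN: only $count/4 poll groups have an active I/O qpair"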
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.839 19:57:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.379 19:57:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.379 19:57:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:10.379 19:57:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:10.638 19:57:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:12.540 19:57:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.816 19:57:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:17.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:17.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.816 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
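The xtrace above walks nvmf/common.sh's device discovery: it matches Intel E810 ports by PCI ID (0x8086:0x159b) and then resolves the bound net device for each port from sysfs. A minimal standalone sketch of that discovery step follows, assuming the usual sysfs layout; the vendor/device IDs are the ones matched in this run, and the harness's own helper covers more NIC families than shown here.

#!/usr/bin/env bash
# Sketch of E810 port discovery: match PCI vendor/device IDs and report the
# kernel net device bound to each matching function via sysfs.
set -euo pipefail

intel=0x8086        # vendor ID matched in the log above
e810_dev=0x159b     # E810 device ID matched in the log above

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    [[ $vendor == "$intel" && $device == "$e810_dev" ]] || continue
    # Only ports with a net device bound (ice driver loaded) are usable
    # for the TCP tests; skip functions without one.
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found $(basename "$pci") ($vendor - $device): $(basename "$net")"
    done
done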
00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:17.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:17.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.817 
19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.817 19:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:27:17.817 00:27:17.817 --- 10.0.0.2 ping statistics --- 00:27:17.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.817 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:27:17.817 00:27:17.817 --- 10.0.0.1 ping statistics --- 00:27:17.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.817 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:17.817 net.core.busy_poll = 1 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:17.817 net.core.busy_read = 1 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4056757 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4056757 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4056757 ']' 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.817 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.817 [2024-07-25 19:57:27.217495] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:17.817 [2024-07-25 19:57:27.217567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.077 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.077 [2024-07-25 19:57:27.284557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.077 [2024-07-25 19:57:27.373773] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.077 [2024-07-25 19:57:27.373833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.077 [2024-07-25 19:57:27.373859] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.077 [2024-07-25 19:57:27.373872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.077 [2024-07-25 19:57:27.373884] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
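Before the target starts, adq_configure_driver applies the ADQ host settings traced above: hardware TC offload, busy polling, an mqprio qdisc with two traffic classes, and a hardware flower filter that steers NVMe/TCP traffic (TCP dst port 4420) into TC 1. A condensed sketch of that sequence follows, using the interface name and listener address from this run; in the captured run the ethtool and tc commands are wrapped in "ip netns exec cvl_0_0_ns_spdk", which is omitted here to keep the sketch generic.

#!/usr/bin/env bash
# Sketch of the ADQ configuration steps logged above (run as root).
set -euo pipefail

IFACE=cvl_0_0           # target-side E810 port from this run
TARGET_IP=10.0.0.2/32   # NVMe-oF listener address from this run
NVMF_PORT=4420

ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3 (map "2@0 2@2").
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 \
    hw 1 mode channel
tc qdisc add dev "$IFACE" ingress
# skip_sw + hw_tc 1: classification happens in the NIC, not in software.
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip "$TARGET_IP" ip_proto tcp dst_port "$NVMF_PORT" skip_sw hw_tc 1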
00:27:18.077 [2024-07-25 19:57:27.373988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.077 [2024-07-25 19:57:27.374056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.077 [2024-07-25 19:57:27.374149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.077 [2024-07-25 19:57:27.374153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.077 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 [2024-07-25 19:57:27.608769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 Malloc1 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 19:57:27 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 [2024-07-25 19:57:27.659836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4056795 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:18.337 19:57:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.337 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.867 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:20.867 19:57:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.867 19:57:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.867 19:57:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.867 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:20.867 "tick_rate": 2700000000, 00:27:20.867 "poll_groups": [ 00:27:20.867 { 00:27:20.867 "name": "nvmf_tgt_poll_group_000", 00:27:20.867 "admin_qpairs": 1, 00:27:20.867 "io_qpairs": 3, 00:27:20.867 "current_admin_qpairs": 1, 00:27:20.867 "current_io_qpairs": 3, 00:27:20.867 "pending_bdev_io": 0, 00:27:20.867 "completed_nvme_io": 25981, 00:27:20.867 "transports": [ 00:27:20.867 { 00:27:20.867 "trtype": "TCP" 00:27:20.867 } 00:27:20.867 ] 00:27:20.867 }, 00:27:20.867 { 00:27:20.867 "name": "nvmf_tgt_poll_group_001", 00:27:20.867 "admin_qpairs": 0, 00:27:20.867 "io_qpairs": 1, 00:27:20.867 "current_admin_qpairs": 0, 00:27:20.867 "current_io_qpairs": 1, 00:27:20.867 "pending_bdev_io": 0, 00:27:20.867 "completed_nvme_io": 24962, 00:27:20.867 "transports": [ 00:27:20.867 { 00:27:20.867 "trtype": "TCP" 00:27:20.867 } 00:27:20.867 ] 00:27:20.867 }, 00:27:20.867 { 00:27:20.867 "name": "nvmf_tgt_poll_group_002", 00:27:20.867 "admin_qpairs": 0, 00:27:20.867 "io_qpairs": 0, 00:27:20.867 "current_admin_qpairs": 0, 00:27:20.867 "current_io_qpairs": 0, 00:27:20.867 "pending_bdev_io": 0, 00:27:20.867 "completed_nvme_io": 0, 
00:27:20.867 "transports": [ 00:27:20.867 { 00:27:20.867 "trtype": "TCP" 00:27:20.867 } 00:27:20.867 ] 00:27:20.867 }, 00:27:20.867 { 00:27:20.867 "name": "nvmf_tgt_poll_group_003", 00:27:20.867 "admin_qpairs": 0, 00:27:20.867 "io_qpairs": 0, 00:27:20.867 "current_admin_qpairs": 0, 00:27:20.867 "current_io_qpairs": 0, 00:27:20.867 "pending_bdev_io": 0, 00:27:20.867 "completed_nvme_io": 0, 00:27:20.867 "transports": [ 00:27:20.867 { 00:27:20.867 "trtype": "TCP" 00:27:20.867 } 00:27:20.867 ] 00:27:20.867 } 00:27:20.867 ] 00:27:20.867 }' 00:27:20.868 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:20.868 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:20.868 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:20.868 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:20.868 19:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4056795 00:27:28.978 Initializing NVMe Controllers 00:27:28.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:28.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:28.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:28.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:28.978 Initialization complete. Launching workers. 00:27:28.978 ======================================================== 00:27:28.978 Latency(us) 00:27:28.978 Device Information : IOPS MiB/s Average min max 00:27:28.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13340.10 52.11 4798.05 1276.50 45935.02 00:27:28.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4719.70 18.44 13562.26 1866.53 60598.95 00:27:28.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5231.90 20.44 12234.76 1618.41 59874.80 00:27:28.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3706.10 14.48 17274.04 2740.66 61996.91 00:27:28.978 ======================================================== 00:27:28.978 Total : 26997.79 105.46 9483.98 1276.50 61996.91 00:27:28.978 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.978 rmmod nvme_tcp 00:27:28.978 rmmod nvme_fabrics 00:27:28.978 rmmod nvme_keyring 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4056757 ']' 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 4056757 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4056757 ']' 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4056757 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4056757 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4056757' 00:27:28.978 killing process with pid 4056757 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4056757 00:27:28.978 19:57:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4056757 00:27:28.978 19:57:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.978 19:57:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.978 19:57:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.979 19:57:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.979 19:57:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.979 19:57:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.979 19:57:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.979 19:57:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.266 19:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.266 19:57:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:32.266 00:27:32.266 real 0m44.800s 00:27:32.266 user 2m35.254s 00:27:32.266 sys 0m11.212s 00:27:32.266 19:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:32.266 19:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.266 ************************************ 00:27:32.266 END TEST nvmf_perf_adq 00:27:32.266 ************************************ 00:27:32.266 19:57:41 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:32.267 19:57:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:32.267 19:57:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:32.267 19:57:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.267 ************************************ 00:27:32.267 START TEST nvmf_shutdown 00:27:32.267 ************************************ 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:32.267 * Looking for test storage... 
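The perf_adq pass/fail decision logged just above comes from nvmf_get_stats: with ADQ steering active, the I/O qpairs should be confined to a subset of the four poll groups, so the test requires at least two groups with zero current_io_qpairs. A sketch of that check as a standalone script follows; the rpc.py path reflects this workspace layout and is otherwise an assumption, and the threshold of two idle groups mirrors the "[[ count -lt 2 ]]" test in perf_adq.sh.

#!/usr/bin/env bash
# Sketch of the ADQ steering check: count poll groups with no active I/O
# qpairs and require at least two of them to be idle.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Query target statistics and count poll groups carrying no I/O qpairs.
idle=$("$RPC" nvmf_get_stats |
    jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' |
    wc -l)

if (( idle < 2 )); then
    echo "ADQ steering check failed: only $idle idle poll group(s)" >&2
    exit 1
fi
echo "ADQ steering OK: $idle poll groups carry no I/O qpairs"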
00:27:32.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:32.267 ************************************ 00:27:32.267 START TEST nvmf_shutdown_tc1 00:27:32.267 ************************************ 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:32.267 19:57:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.267 19:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.171 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.171 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.171 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.171 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:34.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:34.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.172 19:57:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:34.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:34.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:27:34.172 00:27:34.172 --- 10.0.0.2 ping statistics --- 00:27:34.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.172 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:27:34.172 00:27:34.172 --- 10.0.0.1 ping statistics --- 00:27:34.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.172 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4060070 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4060070 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4060070 ']' 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.172 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.172 [2024-07-25 19:57:43.404862] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:27:34.172 [2024-07-25 19:57:43.404951] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.172 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.172 [2024-07-25 19:57:43.475219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.172 [2024-07-25 19:57:43.567642] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.172 [2024-07-25 19:57:43.567705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.172 [2024-07-25 19:57:43.567732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.172 [2024-07-25 19:57:43.567746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.172 [2024-07-25 19:57:43.567758] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.172 [2024-07-25 19:57:43.567883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.172 [2024-07-25 19:57:43.567966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.172 [2024-07-25 19:57:43.568034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:34.172 [2024-07-25 19:57:43.568036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.430 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:34.430 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.431 [2024-07-25 19:57:43.711785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.431 19:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.431 Malloc1 00:27:34.431 [2024-07-25 19:57:43.792618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.431 Malloc2 00:27:34.689 Malloc3 00:27:34.689 Malloc4 00:27:34.689 Malloc5 00:27:34.689 Malloc6 00:27:34.689 Malloc7 00:27:34.689 Malloc8 00:27:34.947 Malloc9 00:27:34.947 Malloc10 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4060250 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4060250 /var/tmp/bdevperf.sock 00:27:34.947 19:57:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4060250 ']' 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:34.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.947 { 00:27:34.947 "params": { 00:27:34.947 "name": "Nvme$subsystem", 00:27:34.947 "trtype": "$TEST_TRANSPORT", 00:27:34.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.947 "adrfam": "ipv4", 00:27:34.947 "trsvcid": "$NVMF_PORT", 00:27:34.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.947 "hdgst": ${hdgst:-false}, 00:27:34.947 "ddgst": ${ddgst:-false} 00:27:34.947 }, 00:27:34.947 "method": "bdev_nvme_attach_controller" 00:27:34.947 } 00:27:34.947 EOF 00:27:34.947 )") 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.947 { 00:27:34.947 "params": { 00:27:34.947 "name": "Nvme$subsystem", 00:27:34.947 "trtype": "$TEST_TRANSPORT", 00:27:34.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.947 "adrfam": "ipv4", 00:27:34.947 "trsvcid": "$NVMF_PORT", 00:27:34.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.947 "hdgst": ${hdgst:-false}, 00:27:34.947 "ddgst": ${ddgst:-false} 00:27:34.947 }, 00:27:34.947 "method": "bdev_nvme_attach_controller" 00:27:34.947 } 00:27:34.947 EOF 00:27:34.947 )") 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.947 { 00:27:34.947 "params": { 00:27:34.947 "name": "Nvme$subsystem", 00:27:34.947 "trtype": 
"$TEST_TRANSPORT", 00:27:34.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.947 "adrfam": "ipv4", 00:27:34.947 "trsvcid": "$NVMF_PORT", 00:27:34.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.947 "hdgst": ${hdgst:-false}, 00:27:34.947 "ddgst": ${ddgst:-false} 00:27:34.947 }, 00:27:34.947 "method": "bdev_nvme_attach_controller" 00:27:34.947 } 00:27:34.947 EOF 00:27:34.947 )") 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.947 { 00:27:34.947 "params": { 00:27:34.947 "name": "Nvme$subsystem", 00:27:34.947 "trtype": "$TEST_TRANSPORT", 00:27:34.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.947 "adrfam": "ipv4", 00:27:34.947 "trsvcid": "$NVMF_PORT", 00:27:34.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.947 "hdgst": ${hdgst:-false}, 00:27:34.947 "ddgst": ${ddgst:-false} 00:27:34.947 }, 00:27:34.947 "method": "bdev_nvme_attach_controller" 00:27:34.947 } 00:27:34.947 EOF 00:27:34.947 )") 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.947 { 00:27:34.947 "params": { 00:27:34.947 "name": "Nvme$subsystem", 00:27:34.947 "trtype": "$TEST_TRANSPORT", 00:27:34.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.947 "adrfam": "ipv4", 00:27:34.947 "trsvcid": "$NVMF_PORT", 00:27:34.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.947 "hdgst": ${hdgst:-false}, 00:27:34.947 "ddgst": ${ddgst:-false} 00:27:34.947 }, 00:27:34.947 "method": "bdev_nvme_attach_controller" 00:27:34.947 } 00:27:34.947 EOF 00:27:34.947 )") 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.947 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.947 { 00:27:34.947 "params": { 00:27:34.947 "name": "Nvme$subsystem", 00:27:34.947 "trtype": "$TEST_TRANSPORT", 00:27:34.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.947 "adrfam": "ipv4", 00:27:34.947 "trsvcid": "$NVMF_PORT", 00:27:34.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.947 "hdgst": ${hdgst:-false}, 00:27:34.947 "ddgst": ${ddgst:-false} 00:27:34.947 }, 00:27:34.947 "method": "bdev_nvme_attach_controller" 00:27:34.947 } 00:27:34.948 EOF 00:27:34.948 )") 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.948 { 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme$subsystem", 00:27:34.948 "trtype": "$TEST_TRANSPORT", 
00:27:34.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "$NVMF_PORT", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.948 "hdgst": ${hdgst:-false}, 00:27:34.948 "ddgst": ${ddgst:-false} 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 } 00:27:34.948 EOF 00:27:34.948 )") 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.948 { 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme$subsystem", 00:27:34.948 "trtype": "$TEST_TRANSPORT", 00:27:34.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "$NVMF_PORT", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.948 "hdgst": ${hdgst:-false}, 00:27:34.948 "ddgst": ${ddgst:-false} 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 } 00:27:34.948 EOF 00:27:34.948 )") 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.948 { 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme$subsystem", 00:27:34.948 "trtype": "$TEST_TRANSPORT", 00:27:34.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "$NVMF_PORT", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.948 "hdgst": ${hdgst:-false}, 00:27:34.948 "ddgst": ${ddgst:-false} 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 } 00:27:34.948 EOF 00:27:34.948 )") 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.948 { 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme$subsystem", 00:27:34.948 "trtype": "$TEST_TRANSPORT", 00:27:34.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "$NVMF_PORT", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.948 "hdgst": ${hdgst:-false}, 00:27:34.948 "ddgst": ${ddgst:-false} 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 } 00:27:34.948 EOF 00:27:34.948 )") 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:34.948 19:57:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme1", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme2", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme3", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme4", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme5", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme6", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme7", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme8", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:34.948 "hdgst": false, 
00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme9", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.948 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:34.948 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:34.948 "hdgst": false, 00:27:34.948 "ddgst": false 00:27:34.948 }, 00:27:34.948 "method": "bdev_nvme_attach_controller" 00:27:34.948 },{ 00:27:34.948 "params": { 00:27:34.948 "name": "Nvme10", 00:27:34.948 "trtype": "tcp", 00:27:34.948 "traddr": "10.0.0.2", 00:27:34.948 "adrfam": "ipv4", 00:27:34.948 "trsvcid": "4420", 00:27:34.949 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:34.949 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:34.949 "hdgst": false, 00:27:34.949 "ddgst": false 00:27:34.949 }, 00:27:34.949 "method": "bdev_nvme_attach_controller" 00:27:34.949 }' 00:27:34.949 [2024-07-25 19:57:44.288673] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:34.949 [2024-07-25 19:57:44.288761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:34.949 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.949 [2024-07-25 19:57:44.353126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.244 [2024-07-25 19:57:44.440628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4060250 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:37.146 19:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:38.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4060250 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4060070 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.081 { 00:27:38.081 "params": { 00:27:38.081 "name": "Nvme$subsystem", 00:27:38.081 "trtype": "$TEST_TRANSPORT", 00:27:38.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.081 "adrfam": "ipv4", 00:27:38.081 "trsvcid": "$NVMF_PORT", 00:27:38.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.081 "hdgst": ${hdgst:-false}, 00:27:38.081 "ddgst": ${ddgst:-false} 00:27:38.081 }, 00:27:38.081 "method": "bdev_nvme_attach_controller" 00:27:38.081 } 00:27:38.081 EOF 00:27:38.081 )") 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.081 { 00:27:38.081 "params": { 00:27:38.081 "name": "Nvme$subsystem", 00:27:38.081 "trtype": "$TEST_TRANSPORT", 00:27:38.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.081 "adrfam": "ipv4", 00:27:38.081 "trsvcid": "$NVMF_PORT", 00:27:38.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.081 "hdgst": ${hdgst:-false}, 00:27:38.081 "ddgst": ${ddgst:-false} 00:27:38.081 }, 00:27:38.081 "method": "bdev_nvme_attach_controller" 00:27:38.081 } 00:27:38.081 EOF 00:27:38.081 )") 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.081 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.081 { 00:27:38.081 "params": { 00:27:38.081 "name": "Nvme$subsystem", 00:27:38.081 "trtype": "$TEST_TRANSPORT", 00:27:38.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.081 "adrfam": "ipv4", 00:27:38.081 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.082 { 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme$subsystem", 00:27:38.082 "trtype": "$TEST_TRANSPORT", 00:27:38.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "$NVMF_PORT", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.082 "hdgst": ${hdgst:-false}, 00:27:38.082 "ddgst": ${ddgst:-false} 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 } 00:27:38.082 EOF 00:27:38.082 )") 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
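The document resolved in the following lines is consumed through a process substitution, as the shutdown.sh@77/@91 commands and the bash kill message above show (--json /dev/fd/NN). A minimal sketch of that invocation pattern, using the workspace paths from this run and assuming common.sh's environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT) is already exported:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this job
    source "$rootdir/test/nvmf/common.sh"                       # defines gen_nvmf_target_json (assumed location)

    # 64-deep, 64 KiB verify workload against all ten subsystems, config passed via /dev/fd.
    "$rootdir/build/examples/bdevperf" \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1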
00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:38.082 19:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme1", 00:27:38.082 "trtype": "tcp", 00:27:38.082 "traddr": "10.0.0.2", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "4420", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.082 "hdgst": false, 00:27:38.082 "ddgst": false 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 },{ 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme2", 00:27:38.082 "trtype": "tcp", 00:27:38.082 "traddr": "10.0.0.2", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "4420", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.082 "hdgst": false, 00:27:38.082 "ddgst": false 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 },{ 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme3", 00:27:38.082 "trtype": "tcp", 00:27:38.082 "traddr": "10.0.0.2", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "4420", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:38.082 "hdgst": false, 00:27:38.082 "ddgst": false 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 },{ 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme4", 00:27:38.082 "trtype": "tcp", 00:27:38.082 "traddr": "10.0.0.2", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "4420", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:38.082 "hdgst": false, 00:27:38.082 "ddgst": false 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 },{ 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme5", 00:27:38.082 "trtype": "tcp", 00:27:38.082 "traddr": "10.0.0.2", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "4420", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:38.082 "hdgst": false, 00:27:38.082 "ddgst": false 00:27:38.082 }, 00:27:38.082 "method": "bdev_nvme_attach_controller" 00:27:38.082 },{ 00:27:38.082 "params": { 00:27:38.082 "name": "Nvme6", 00:27:38.082 "trtype": "tcp", 00:27:38.082 "traddr": "10.0.0.2", 00:27:38.082 "adrfam": "ipv4", 00:27:38.082 "trsvcid": "4420", 00:27:38.082 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:38.082 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:38.082 "hdgst": false, 00:27:38.082 "ddgst": false 00:27:38.082 }, 00:27:38.083 "method": "bdev_nvme_attach_controller" 00:27:38.083 },{ 00:27:38.083 "params": { 00:27:38.083 "name": "Nvme7", 00:27:38.083 "trtype": "tcp", 00:27:38.083 "traddr": "10.0.0.2", 00:27:38.083 "adrfam": "ipv4", 00:27:38.083 "trsvcid": "4420", 00:27:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:38.083 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:38.083 "hdgst": false, 00:27:38.083 "ddgst": false 00:27:38.083 }, 00:27:38.083 "method": "bdev_nvme_attach_controller" 00:27:38.083 },{ 00:27:38.083 "params": { 00:27:38.083 "name": "Nvme8", 00:27:38.083 "trtype": "tcp", 00:27:38.083 "traddr": "10.0.0.2", 00:27:38.083 "adrfam": "ipv4", 00:27:38.083 "trsvcid": "4420", 00:27:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:38.083 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:38.083 "hdgst": false, 
00:27:38.083 "ddgst": false 00:27:38.083 }, 00:27:38.083 "method": "bdev_nvme_attach_controller" 00:27:38.083 },{ 00:27:38.083 "params": { 00:27:38.083 "name": "Nvme9", 00:27:38.083 "trtype": "tcp", 00:27:38.083 "traddr": "10.0.0.2", 00:27:38.083 "adrfam": "ipv4", 00:27:38.083 "trsvcid": "4420", 00:27:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:38.083 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:38.083 "hdgst": false, 00:27:38.083 "ddgst": false 00:27:38.083 }, 00:27:38.083 "method": "bdev_nvme_attach_controller" 00:27:38.083 },{ 00:27:38.083 "params": { 00:27:38.083 "name": "Nvme10", 00:27:38.083 "trtype": "tcp", 00:27:38.083 "traddr": "10.0.0.2", 00:27:38.083 "adrfam": "ipv4", 00:27:38.083 "trsvcid": "4420", 00:27:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:38.083 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:38.083 "hdgst": false, 00:27:38.083 "ddgst": false 00:27:38.083 }, 00:27:38.083 "method": "bdev_nvme_attach_controller" 00:27:38.083 }' 00:27:38.083 [2024-07-25 19:57:47.311243] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:38.083 [2024-07-25 19:57:47.311321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060566 ] 00:27:38.083 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.083 [2024-07-25 19:57:47.378889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.083 [2024-07-25 19:57:47.469447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.984 Running I/O for 1 seconds... 00:27:40.918 00:27:40.918 Latency(us) 00:27:40.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.918 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme1n1 : 1.10 235.79 14.74 0.00 0.00 266469.56 11650.84 251658.24 00:27:40.918 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme2n1 : 1.09 233.84 14.61 0.00 0.00 265920.47 18835.53 248551.35 00:27:40.918 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme3n1 : 1.17 273.83 17.11 0.00 0.00 224038.00 16214.09 250104.79 00:27:40.918 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme4n1 : 1.10 245.89 15.37 0.00 0.00 241457.77 4903.06 237677.23 00:27:40.918 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme5n1 : 1.18 217.55 13.60 0.00 0.00 273105.16 22524.97 268746.15 00:27:40.918 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme6n1 : 1.17 222.30 13.89 0.00 0.00 261146.96 8155.59 250104.79 00:27:40.918 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme7n1 : 1.19 269.54 16.85 0.00 0.00 212669.02 6262.33 257872.02 00:27:40.918 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 
00:27:40.918 Nvme8n1 : 1.14 225.13 14.07 0.00 0.00 249859.60 21262.79 251658.24 00:27:40.918 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme9n1 : 1.18 216.54 13.53 0.00 0.00 256645.88 21359.88 282727.16 00:27:40.918 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.918 Verification LBA range: start 0x0 length 0x400 00:27:40.918 Nvme10n1 : 1.19 268.40 16.77 0.00 0.00 203805.43 13301.38 260978.92 00:27:40.918 =================================================================================================================== 00:27:40.918 Total : 2408.80 150.55 0.00 0.00 243328.71 4903.06 282727.16 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.176 rmmod nvme_tcp 00:27:41.176 rmmod nvme_fabrics 00:27:41.176 rmmod nvme_keyring 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4060070 ']' 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4060070 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 4060070 ']' 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 4060070 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4060070 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:41.176 19:57:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4060070' 00:27:41.176 killing process with pid 4060070 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 4060070 00:27:41.176 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 4060070 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.744 19:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.649 00:27:43.649 real 0m11.722s 00:27:43.649 user 0m34.314s 00:27:43.649 sys 0m3.079s 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.649 ************************************ 00:27:43.649 END TEST nvmf_shutdown_tc1 00:27:43.649 ************************************ 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.649 19:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:43.907 ************************************ 00:27:43.907 START TEST nvmf_shutdown_tc2 00:27:43.907 ************************************ 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.907 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.908 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.908 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.908 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.908 19:57:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:27:43.908 00:27:43.908 --- 10.0.0.2 ping statistics --- 00:27:43.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.908 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:27:43.908 00:27:43.908 --- 10.0.0.1 ping statistics --- 00:27:43.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.908 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4061433 
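For reference, the nvmf_tcp_init plumbing traced above (namespace creation, addressing, the firewall rule and the two connectivity pings) collected into one block; the cvl_0_0/cvl_0_1 names are the E810 ports this job detected, and the commands need root:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port lives in the namespace
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1               # initiator port stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                             # root ns -> namespaced target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"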
00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4061433 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4061433 ']' 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.908 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.909 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.909 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.909 [2024-07-25 19:57:53.307747] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:43.909 [2024-07-25 19:57:53.307841] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.164 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.164 [2024-07-25 19:57:53.378882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.165 [2024-07-25 19:57:53.469773] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.165 [2024-07-25 19:57:53.469837] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.165 [2024-07-25 19:57:53.469863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.165 [2024-07-25 19:57:53.469877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.165 [2024-07-25 19:57:53.469889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
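A note on the core masks in this run: nvmf_tgt is launched with -m 0x1E, i.e. binary 11110 (= 2 + 4 + 8 + 16), which selects cores 1-4 and accounts for the four "Reactor started on core" notices for cores 1-4 that follow. The bdev_svc/bdevperf helpers earlier use -m 0x1 (binary 1) and therefore run a single reactor on core 0.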
00:27:44.165 [2024-07-25 19:57:53.469973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.165 [2024-07-25 19:57:53.470093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.165 [2024-07-25 19:57:53.470169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:44.165 [2024-07-25 19:57:53.470172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.421 [2024-07-25 19:57:53.619739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.421 19:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.421 Malloc1 00:27:44.421 [2024-07-25 19:57:53.694970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.421 Malloc2 00:27:44.421 Malloc3 00:27:44.421 Malloc4 00:27:44.678 Malloc5 00:27:44.678 Malloc6 00:27:44.678 Malloc7 00:27:44.678 Malloc8 00:27:44.678 Malloc9 00:27:44.678 Malloc10 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4061565 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4061565 /var/tmp/bdevperf.sock 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4061565 ']' 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:27:44.936 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.936 { 00:27:44.936 "params": { 00:27:44.936 "name": "Nvme$subsystem", 00:27:44.936 "trtype": "$TEST_TRANSPORT", 00:27:44.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.936 "adrfam": "ipv4", 00:27:44.936 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:44.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.937 { 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme$subsystem", 00:27:44.937 "trtype": "$TEST_TRANSPORT", 00:27:44.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "$NVMF_PORT", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.937 "hdgst": ${hdgst:-false}, 00:27:44.937 "ddgst": ${ddgst:-false} 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 } 00:27:44.937 EOF 00:27:44.937 )") 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:44.937 19:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme1", 00:27:44.937 "trtype": "tcp", 00:27:44.937 "traddr": "10.0.0.2", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "4420", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.937 "hdgst": false, 00:27:44.937 "ddgst": false 00:27:44.937 }, 00:27:44.937 "method": "bdev_nvme_attach_controller" 00:27:44.937 },{ 00:27:44.937 "params": { 00:27:44.937 "name": "Nvme2", 00:27:44.937 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme3", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme4", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme5", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme6", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme7", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme8", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:44.938 "hdgst": false, 
00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme9", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 },{ 00:27:44.938 "params": { 00:27:44.938 "name": "Nvme10", 00:27:44.938 "trtype": "tcp", 00:27:44.938 "traddr": "10.0.0.2", 00:27:44.938 "adrfam": "ipv4", 00:27:44.938 "trsvcid": "4420", 00:27:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:44.938 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:44.938 "hdgst": false, 00:27:44.938 "ddgst": false 00:27:44.938 }, 00:27:44.938 "method": "bdev_nvme_attach_controller" 00:27:44.938 }' 00:27:44.938 [2024-07-25 19:57:54.186623] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:44.938 [2024-07-25 19:57:54.186715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061565 ] 00:27:44.938 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.938 [2024-07-25 19:57:54.251756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.938 [2024-07-25 19:57:54.338340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.837 Running I/O for 10 seconds... 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- 
# jq -r '.bdevs[0].num_read_ops' 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:46.837 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:47.095 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:47.095 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:47.095 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.095 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.095 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.096 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.096 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.096 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:47.096 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:47.096 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:47.352 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:47.352 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:47.352 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.352 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.352 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.352 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4061565 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4061565 ']' 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4061565 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:47.612 
19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4061565 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4061565' 00:27:47.612 killing process with pid 4061565 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4061565 00:27:47.612 19:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4061565 00:27:47.612 Received shutdown signal, test time was about 0.927338 seconds 00:27:47.612 00:27:47.612 Latency(us) 00:27:47.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme1n1 : 0.89 220.46 13.78 0.00 0.00 282721.86 7670.14 254765.13 00:27:47.612 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme2n1 : 0.89 215.64 13.48 0.00 0.00 286579.17 19320.98 251658.24 00:27:47.612 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme3n1 : 0.92 277.45 17.34 0.00 0.00 218736.45 22913.33 253211.69 00:27:47.612 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme4n1 : 0.92 278.23 17.39 0.00 0.00 213193.77 20291.89 246997.90 00:27:47.612 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme5n1 : 0.93 280.62 17.54 0.00 0.00 206709.32 1723.35 229910.00 00:27:47.612 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme6n1 : 0.91 211.82 13.24 0.00 0.00 268078.40 21554.06 260978.92 00:27:47.612 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme7n1 : 0.88 217.70 13.61 0.00 0.00 253921.66 21068.61 250104.79 00:27:47.612 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme8n1 : 0.90 213.33 13.33 0.00 0.00 253840.24 18447.17 240784.12 00:27:47.612 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme9n1 : 0.91 210.66 13.17 0.00 0.00 251860.26 24369.68 262532.36 00:27:47.612 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.612 Verification LBA range: start 0x0 length 0x400 00:27:47.612 Nvme10n1 : 0.92 209.45 13.09 0.00 0.00 247773.11 21068.61 279620.27 00:27:47.612 =================================================================================================================== 00:27:47.612 Total : 2335.35 145.96 0.00 0.00 
245133.86 1723.35 279620.27 00:27:47.872 19:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4061433 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.806 rmmod nvme_tcp 00:27:48.806 rmmod nvme_fabrics 00:27:48.806 rmmod nvme_keyring 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4061433 ']' 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4061433 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4061433 ']' 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4061433 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4061433 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4061433' 00:27:48.806 killing process with pid 4061433 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4061433 00:27:48.806 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4061433 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.376 19:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:51.915 00:27:51.915 real 0m7.662s 00:27:51.915 user 0m23.167s 00:27:51.915 sys 0m1.477s 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.915 ************************************ 00:27:51.915 END TEST nvmf_shutdown_tc2 00:27:51.915 ************************************ 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.915 ************************************ 00:27:51.915 START TEST nvmf_shutdown_tc3 00:27:51.915 ************************************ 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # 
xtrace_disable 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.915 19:58:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.915 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.916 19:58:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.916 19:58:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:27:51.916 00:27:51.916 --- 10.0.0.2 ping statistics --- 00:27:51.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.916 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:27:51.916 00:27:51.916 --- 10.0.0.1 ping statistics --- 00:27:51.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.916 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4062509 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4062509 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4062509 ']' 00:27:51.916 19:58:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:51.916 19:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.916 [2024-07-25 19:58:01.025253] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:51.916 [2024-07-25 19:58:01.025355] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.916 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.916 [2024-07-25 19:58:01.100547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.916 [2024-07-25 19:58:01.196343] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.916 [2024-07-25 19:58:01.196409] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.916 [2024-07-25 19:58:01.196426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.916 [2024-07-25 19:58:01.196439] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.916 [2024-07-25 19:58:01.196452] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
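Note on the trace above: the nvmf_tgt command line for tc3 now carries the "ip netns exec cvl_0_0_ns_spdk" prefix three times, where tc2 had it twice. The reason is visible at nvmf/common.sh@270 earlier in the trace: NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") prepends the namespace prefix to NVMF_APP, and because the shutdown test cases run in the same shell the prefix accumulates once per test case. Re-entering the namespace the process is already in is effectively a no-op, so the extra prefixes are harmless. A minimal reproduction of the accumulation, with shortened placeholder values:

# minimal reproduction of the prefix accumulation seen in the trace (placeholder values)
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)

for tc in tc1 tc2 tc3; do
    # nvmf/common.sh@270 runs once per test case in the same shell
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    echo "$tc: ${NVMF_APP[*]}"
done
# tc2 prints two "ip netns exec cvl_0_0_ns_spdk" prefixes and tc3 prints three,
# matching the launch lines in the trace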
00:27:51.916 [2024-07-25 19:58:01.196544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.916 [2024-07-25 19:58:01.196661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.916 [2024-07-25 19:58:01.196732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.916 [2024-07-25 19:58:01.196730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:51.916 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.916 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:51.916 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.916 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.916 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.916 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.174 [2024-07-25 19:58:01.347803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.174 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.175 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.175 Malloc1 00:27:52.175 [2024-07-25 19:58:01.436959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.175 Malloc2 00:27:52.175 Malloc3 00:27:52.175 Malloc4 00:27:52.432 Malloc5 00:27:52.432 Malloc6 00:27:52.432 Malloc7 00:27:52.432 Malloc8 00:27:52.432 Malloc9 00:27:52.432 Malloc10 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4062590 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4062590 /var/tmp/bdevperf.sock 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4062590 ']' 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
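Each pass through the shutdown.sh@27/@28 loop above cats one more block of RPCs into rpcs.txt for subsystems 1 through 10; replaying that batch (the bare rpc_cmd at shutdown.sh@35) is what produces the Malloc1..Malloc10 bdevs and the TCP listener on 10.0.0.2:4420 seen next. A rough sketch of the batch's shape, assuming standard SPDK rpc.py verbs; the malloc size, block size and serial strings are placeholders, not values taken from this log:

    # Append one subsystem's worth of RPCs per iteration (sizes/serials are illustrative only).
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    # The accumulated file is presumably replayed through the single bare rpc_cmd call at shutdown.sh@35.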
00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:52.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:52.690 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.691 { 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme$subsystem", 00:27:52.691 "trtype": "$TEST_TRANSPORT", 00:27:52.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "$NVMF_PORT", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.691 "hdgst": ${hdgst:-false}, 00:27:52.691 "ddgst": ${ddgst:-false} 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 } 00:27:52.691 EOF 00:27:52.691 )") 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:52.691 19:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:52.691 "params": { 00:27:52.691 "name": "Nvme1", 00:27:52.691 "trtype": "tcp", 00:27:52.691 "traddr": "10.0.0.2", 00:27:52.691 "adrfam": "ipv4", 00:27:52.691 "trsvcid": "4420", 00:27:52.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.691 "hdgst": false, 00:27:52.691 "ddgst": false 00:27:52.691 }, 00:27:52.691 "method": "bdev_nvme_attach_controller" 00:27:52.691 },{ 00:27:52.691 "params": { 00:27:52.692 "name": "Nvme2", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme3", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme4", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme5", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme6", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme7", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme8", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:52.692 "hdgst": false, 
00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme9", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 },{ 00:27:52.692 "params": { 00:27:52.692 "name": "Nvme10", 00:27:52.692 "trtype": "tcp", 00:27:52.692 "traddr": "10.0.0.2", 00:27:52.692 "adrfam": "ipv4", 00:27:52.692 "trsvcid": "4420", 00:27:52.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:52.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:52.692 "hdgst": false, 00:27:52.692 "ddgst": false 00:27:52.692 }, 00:27:52.692 "method": "bdev_nvme_attach_controller" 00:27:52.692 }' 00:27:52.692 [2024-07-25 19:58:01.937794] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:52.692 [2024-07-25 19:58:01.937887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062590 ] 00:27:52.692 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.692 [2024-07-25 19:58:02.002345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.692 [2024-07-25 19:58:02.091138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.598 Running I/O for 10 seconds... 00:27:54.598 19:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:54.598 19:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:54.598 19:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:54.598 19:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.598 19:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.598 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.887 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:54.887 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:54.887 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:54.887 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:54.887 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:55.147 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4062509 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 4062509 ']' 00:27:55.416 19:58:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 4062509 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4062509 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4062509' 00:27:55.416 killing process with pid 4062509 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 4062509 00:27:55.416 19:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 4062509 00:27:55.417 [2024-07-25 19:58:04.654569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the 
state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.654988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655682] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.655830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c560 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.657927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.417 [2024-07-25 19:58:04.657973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.417 [2024-07-25 19:58:04.657991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.417 [2024-07-25 19:58:04.658005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.417 [2024-07-25 19:58:04.658019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.417 [2024-07-25 19:58:04.658032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.417 [2024-07-25 19:58:04.658047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.417 [2024-07-25 19:58:04.658067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.417 [2024-07-25 19:58:04.658094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4d300 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.658501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.658564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.658582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.658595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.658646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.417 [2024-07-25 19:58:04.658661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.658980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659204] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 
00:27:55.418 [2024-07-25 19:58:04.659785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.659973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.660019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.660039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.660085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fb10 is same with the state(5) to be set 00:27:55.418 [2024-07-25 19:58:04.661044] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.418 [2024-07-25 19:58:04.663514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.418 [2024-07-25 19:58:04.663852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.418 [2024-07-25 19:58:04.663866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.663882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.663896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.663912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.663927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.663943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.663958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.663975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.663989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.419 [2024-07-25 19:58:04.664821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.419 [2024-07-25 19:58:04.664835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.664851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.664866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.664881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.664910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.664924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.664940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.664958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.664974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.664987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420 [2024-07-25 19:58:04.665523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420 [2024-07-25 19:58:04.665619] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c261d0 was disconnected and freed. reset controller. 00:27:55.420
[2024-07-25 19:58:04.665663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420
[2024-07-25 19:58:04.665730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420
[2024-07-25 19:58:04.665771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420
[2024-07-25 19:58:04.665802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420
[2024-07-25 19:58:04.665830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420
[2024-07-25 19:58:04.665858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.420
[2024-07-25 19:58:04.665900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.420
[2024-07-25 19:58:04.665908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.420
[2024-07-25 19:58:04.665913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.665923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.665927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.665939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.665940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.665955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.665955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.665969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.665974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.665983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.665988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.665999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.421
[2024-07-25 19:58:04.666251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.421
[2024-07-25 19:58:04.666264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.421
[2024-07-25 19:58:04.666277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d360 is same with the state(5) to be set 00:27:55.422
[2024-07-25 19:58:04.666589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422
[2024-07-25 19:58:04.666714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422
[2024-07-25 19:58:04.666729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.666977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.666990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.667009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.667023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.667046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.667066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.667111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.667126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.667142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.422 [2024-07-25 19:58:04.667156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.422 [2024-07-25 19:58:04.667171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423 [2024-07-25 19:58:04.667200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423 [2024-07-25 19:58:04.667230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423 [2024-07-25 19:58:04.667260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423 [2024-07-25 19:58:04.667289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423 [2024-07-25 19:58:04.667318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423 [2024-07-25 19:58:04.667347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423 [2024-07-25 19:58:04.667360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.423
[2024-07-25 19:58:04.667376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.423
[2024-07-25 19:58:04.667787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.423
[2024-07-25 19:58:04.667802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423
[2024-07-25 19:58:04.667830]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667886] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca3cd0 was disconnected and freed. reset controller. 00:27:55.423 [2024-07-25 19:58:04.667894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.667985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 
00:27:55.423 [2024-07-25 19:58:04.668182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.423 [2024-07-25 19:58:04.668195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668325] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.424 [2024-07-25 19:58:04.668336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.668453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to 
be set 00:27:55.424
[2024-07-25 19:58:04.668481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424
[2024-07-25 19:58:04.668484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424
[2024-07-25 19:58:04.668497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424
[2024-07-25 19:58:04.668526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424
[2024-07-25 19:58:04.668539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424
[2024-07-25 19:58:04.668551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424
[2024-07-25 19:58:04.668565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d800 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5f90 is same with the state(5) to be set 00:27:55.424
[2024-07-25 19:58:04.668666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424
[2024-07-25 19:58:04.668688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424
[2024-07-25 19:58:04.668703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424
[2024-07-25 19:58:04.668717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424
[2024-07-25 19:58:04.668731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424
[2024-07-25 19:58:04.668745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78810 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.668852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.668881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.668908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.668941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.668955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4b190 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.668993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669125] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7bf90 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.669157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4d300 (9): Bad file descriptor 00:27:55.424 [2024-07-25 19:58:04.669209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.424 [2024-07-25 19:58:04.669317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.424 [2024-07-25 19:58:04.669330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b706b0 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.669932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.669975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.424 [2024-07-25 19:58:04.669990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670434] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 
00:27:55.425 [2024-07-25 19:58:04.670705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.670800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77dcc0 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:55.425 [2024-07-25 19:58:04.672076] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:55.425 [2024-07-25 19:58:04.672101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78810 (9): Bad file descriptor 00:27:55.425 [2024-07-25 19:58:04.672123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4b190 (9): Bad file descriptor 00:27:55.425 [2024-07-25 19:58:04.672259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 
[2024-07-25 19:58:04.672476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.425 [2024-07-25 19:58:04.672599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the 
state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.672992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e160 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.426 [2024-07-25 19:58:04.673614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4b190 with addr=10.0.0.2, port=4420 00:27:55.426 [2024-07-25 19:58:04.673631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4b190 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.426 [2024-07-25 19:58:04.673760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b78810 with addr=10.0.0.2, port=4420 00:27:55.426 [2024-07-25 19:58:04.673785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78810 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.673876] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.426 [2024-07-25 19:58:04.674232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4b190 (9): Bad file descriptor 00:27:55.426 [2024-07-25 19:58:04.674260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78810 (9): Bad file descriptor 00:27:55.426 [2024-07-25 19:58:04.674297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674364] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674369] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.426 [2024-07-25 19:58:04.674377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.426 [2024-07-25 19:58:04.674436] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.427 [2024-07-25 19:58:04.674445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:55.427 [2024-07-25 19:58:04.674644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674656] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:55.427 [2024-07-25 19:58:04.674657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:55.427 [2024-07-25 19:58:04.674686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674697] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:55.427 [2024-07-25 19:58:04.674700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:55.427 [2024-07-25 19:58:04.674714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674725] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:55.427 [2024-07-25 19:58:04.674727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.427 [2024-07-25 19:58:04.674969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:55.427 [2024-07-25 19:58:04.674983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.674996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e620 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675277] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.427 [2024-07-25 19:58:04.675919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.675998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 
[2024-07-25 19:58:04.676030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 00:27:55.427 [2024-07-25 19:58:04.676291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.427 [2024-07-25 19:58:04.676316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.427 [2024-07-25 19:58:04.676338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.427 [2024-07-25 19:58:04.676354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.427 [2024-07-25 19:58:04.676380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.427 [2024-07-25 19:58:04.676395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.676411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b48010 is same with the state(5) to be set 00:27:55.428 [2024-07-25 19:58:04.676490] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b48010 was disconnected and freed. reset controller. 
00:27:55.428 [2024-07-25 19:58:04.677580] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:55.428 [2024-07-25 19:58:04.677660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1f00 (9): Bad file descriptor 00:27:55.428 [2024-07-25 19:58:04.678316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.428 [2024-07-25 19:58:04.678345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf1f00 with addr=10.0.0.2, port=4420 00:27:55.428 [2024-07-25 19:58:04.678372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1f00 is same with the state(5) to be set 00:27:55.428 [2024-07-25 19:58:04.678488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1f00 (9): Bad file descriptor 00:27:55.428 [2024-07-25 19:58:04.678531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5f90 (9): Bad file descriptor 00:27:55.428 [2024-07-25 19:58:04.678585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0fec0 is same with the state(5) to be set 00:27:55.428 [2024-07-25 19:58:04.678760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.428 [2024-07-25 19:58:04.678870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.678883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645610 is same with the state(5) to be set 00:27:55.428 [2024-07-25 19:58:04.678913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7bf90 (9): Bad file descriptor 00:27:55.428 [2024-07-25 19:58:04.678949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b706b0 (9): Bad file descriptor 00:27:55.428 [2024-07-25 19:58:04.679137] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:55.428 [2024-07-25 19:58:04.679160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:55.428 [2024-07-25 19:58:04.679175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:55.428 [2024-07-25 19:58:04.679217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 
19:58:04.679441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679754] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.428 [2024-07-25 19:58:04.679899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.428 [2024-07-25 19:58:04.679912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.679927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.679940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.679955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.679968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.679984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.679996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.680978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.680992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.429 [2024-07-25 19:58:04.681007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.429 [2024-07-25 19:58:04.681020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.681313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.681328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c24ef0 is same with the state(5) to be set 00:27:55.430 [2024-07-25 19:58:04.682654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
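The "(00/08)" that spdk_nvme_print_completion attaches to every ABORTED - SQ DELETION line above is the NVMe status pair status-code-type/status-code: type 0x00 is the generic command status set and code 0x08 is Command Aborted due to SQ Deletion in the NVMe base specification, which is consistent with the I/O submission queue being deleted while the controller is reset. Below is a minimal decode sketch, assuming the base-spec completion layout (status field in completion dword 3, bits 31:17); the sample dword is made up for illustration and the snippet is not SPDK code.

/*
 * Hedged sketch: decode the "(00/08)" status pair seen in the log.
 * Assumes the NVMe base-spec completion layout: CQE DW3 bits 31:17 hold
 * the status field (DNR, M, CRD, SCT, SC); bit 16 is the phase tag.
 * The sample dword below is hypothetical, not taken from the log.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t cqe_dw3 = (uint32_t)0x08 << 17;          /* SCT=0x0 (generic), SC=0x08 */

    uint16_t status = (uint16_t)(cqe_dw3 >> 17);      /* 15-bit status field, phase bit stripped */
    unsigned sc     = status & 0xffu;                 /* status code */
    unsigned sct    = (status >> 8) & 0x7u;           /* status code type */

    printf("sct=%02x sc=%02x%s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
    return 0;
}

Built standalone, this prints sct=00 sc=08 -> ABORTED - SQ DELETION, matching the abort dump above.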
00:27:55.430 [2024-07-25 19:58:04.682677] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:55.430 [2024-07-25 19:58:04.683024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.430 [2024-07-25 19:58:04.683057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4d300 with addr=10.0.0.2, port=4420 00:27:55.430 [2024-07-25 19:58:04.683085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4d300 is same with the state(5) to be set 00:27:55.430 [2024-07-25 19:58:04.683413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:55.430 [2024-07-25 19:58:04.683438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:55.430 [2024-07-25 19:58:04.683473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4d300 (9): Bad file descriptor 00:27:55.430 [2024-07-25 19:58:04.683680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.430 [2024-07-25 19:58:04.683708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b78810 with addr=10.0.0.2, port=4420 00:27:55.430 [2024-07-25 19:58:04.683724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78810 is same with the state(5) to be set 00:27:55.430 [2024-07-25 19:58:04.683828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.430 [2024-07-25 19:58:04.683853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4b190 with addr=10.0.0.2, port=4420 00:27:55.430 [2024-07-25 19:58:04.683868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4b190 is same with the state(5) to be set 00:27:55.430 [2024-07-25 19:58:04.683884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:55.430 [2024-07-25 19:58:04.683896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:55.430 [2024-07-25 19:58:04.683911] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:55.430 [2024-07-25 19:58:04.683972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.430 [2024-07-25 19:58:04.683996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78810 (9): Bad file descriptor 00:27:55.430 [2024-07-25 19:58:04.684016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4b190 (9): Bad file descriptor 00:27:55.430 [2024-07-25 19:58:04.684085] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:55.430 [2024-07-25 19:58:04.684109] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:55.430 [2024-07-25 19:58:04.684138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:55.430 [2024-07-25 19:58:04.684158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:55.430 [2024-07-25 19:58:04.684172] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:55.430 [2024-07-25 19:58:04.684185] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:55.430 [2024-07-25 19:58:04.684234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.430 [2024-07-25 19:58:04.684252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.430 [2024-07-25 19:58:04.687909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:55.430 [2024-07-25 19:58:04.688134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.430 [2024-07-25 19:58:04.688161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf1f00 with addr=10.0.0.2, port=4420 00:27:55.430 [2024-07-25 19:58:04.688178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1f00 is same with the state(5) to be set 00:27:55.430 [2024-07-25 19:58:04.688229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1f00 (9): Bad file descriptor 00:27:55.430 [2024-07-25 19:58:04.688286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:55.430 [2024-07-25 19:58:04.688305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:55.430 [2024-07-25 19:58:04.688318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:55.430 [2024-07-25 19:58:04.688383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
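The connect() failures mixed into the reset attempts above report errno = 111, which is ECONNREFUSED on Linux: the initiator can reach 10.0.0.2, but nothing is accepting on port 4420 for those subsystems at that moment, so each reconnect attempt fails and the controllers end up in the failed state. The sketch below shows how the same errno surfaces with plain POSIX sockets; the address and port are copied from the log, everything else is a hypothetical standalone example, not SPDK's posix_sock_create.

/*
 * Hedged sketch: reproduce "connect() failed, errno = 111" with plain
 * POSIX sockets. 10.0.0.2:4420 comes from the log; on a host where that
 * address is reachable but no listener is bound to the port, connect()
 * fails with ECONNREFUSED (111 on Linux). Illustrative only.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port used in the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Whether the failure is a refusal or a timeout depends on the remote host being up; in this log the errors come back immediately, consistent with a live target whose listeners are being torn down during the reset.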
00:27:55.430 [2024-07-25 19:58:04.688560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0fec0 (9): Bad file descriptor 00:27:55.430 [2024-07-25 19:58:04.688595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1645610 (9): Bad file descriptor 00:27:55.430 [2024-07-25 19:58:04.688710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.688975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.688988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.689010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.430 [2024-07-25 19:58:04.689025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.430 [2024-07-25 19:58:04.689044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.689483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.431 [2024-07-25 19:58:04.689498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.431 [2024-07-25 19:58:04.698487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eac0 is same with the state(5) to be set 
[tcp.c:1598:nvmf_tcp_qpair_set_recv_state message for tqpair=0x77eac0 repeated 48 times, timestamps 19:58:04.698487 through 19:58:04.699095] 
00:27:55.432 [2024-07-25 19:58:04.712084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.432 [2024-07-25 19:58:04.712919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.432 [2024-07-25 19:58:04.712935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.712949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.712965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.712979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.712996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.713433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.713450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca51f0 is same with the state(5) to be set 00:27:55.433 [2024-07-25 19:58:04.714810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.714834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.714857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.714873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.714891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.714905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.714922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.714937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.714953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.714969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.714985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715237] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.433 [2024-07-25 19:58:04.715511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.433 [2024-07-25 19:58:04.715527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.715978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.715994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.434 [2024-07-25 19:58:04.716496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.434 [2024-07-25 19:58:04.716678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.434 [2024-07-25 19:58:04.716693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.716709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.716723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.716739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.716757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.716774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.716788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 
19:58:04.716805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.716819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.716834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca66f0 is same with the state(5) to be set 00:27:55.435 [2024-07-25 19:58:04.718169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.718972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.718986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.435 [2024-07-25 19:58:04.719279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.435 [2024-07-25 19:58:04.719295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.719979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.719996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.436 [2024-07-25 19:58:04.720239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.436 [2024-07-25 19:58:04.720254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b493e0 is same with the state(5) to be set 00:27:55.436 [2024-07-25 19:58:04.720329] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b493e0 was disconnected and freed. reset controller. 
00:27:55.437 [2024-07-25 19:58:04.720388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 
19:58:04.720721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.720974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.720989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721042] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.721588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.437 [2024-07-25 19:58:04.721602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.437 [2024-07-25 19:58:04.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.729976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.729990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.730006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.730020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.730037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.438 [2024-07-25 19:58:04.730052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.730076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1d900 is same with the state(5) to be set 00:27:55.438 [2024-07-25 19:58:04.731783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:55.438 [2024-07-25 19:58:04.731820] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:55.438 [2024-07-25 19:58:04.731851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:55.438 [2024-07-25 19:58:04.732038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.438 [2024-07-25 19:58:04.732068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.732088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.438 [2024-07-25 19:58:04.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.732127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.438 [2024-07-25 19:58:04.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.732155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.438 [2024-07-25 19:58:04.732169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.438 [2024-07-25 19:58:04.732187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0f50 is same with the state(5) to be set 00:27:55.438 [2024-07-25 19:58:04.733457] 
nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:55.438 [2024-07-25 19:58:04.733496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf0f50 (9): Bad file descriptor 00:27:55.438 [2024-07-25 19:58:04.733695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.438 [2024-07-25 19:58:04.733724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7bf90 with addr=10.0.0.2, port=4420 00:27:55.438 [2024-07-25 19:58:04.733741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7bf90 is same with the state(5) to be set 00:27:55.438 [2024-07-25 19:58:04.733864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.438 [2024-07-25 19:58:04.733893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b706b0 with addr=10.0.0.2, port=4420 00:27:55.438 [2024-07-25 19:58:04.733911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b706b0 is same with the state(5) to be set 00:27:55.438 [2024-07-25 19:58:04.734024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.438 [2024-07-25 19:58:04.734051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba5f90 with addr=10.0.0.2, port=4420 00:27:55.438 [2024-07-25 19:58:04.734075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5f90 is same with the state(5) to be set 00:27:55.438 [2024-07-25 19:58:04.734689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:55.439 [2024-07-25 19:58:04.734888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.734971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.734986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 
19:58:04.735221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.439 [2024-07-25 19:58:04.735874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.439 [2024-07-25 19:58:04.735889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.735905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.735919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.735935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.735949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.735965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.735979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.735996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.736749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.736763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45630 is same with the state(5) to be set 00:27:55.440 [2024-07-25 19:58:04.738044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.440 [2024-07-25 19:58:04.738343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.440 [2024-07-25 19:58:04.738367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.738978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.738992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.441 [2024-07-25 19:58:04.739358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-07-25 19:58:04.739375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:55.442 [2024-07-25 19:58:04.739389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 
19:58:04.739695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.739970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.739986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.740001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.740017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.740031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.740047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.442 [2024-07-25 19:58:04.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.442 [2024-07-25 19:58:04.740083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b46b10 is same with the state(5) to be set 00:27:55.442 [2024-07-25 19:58:04.742517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:55.442 [2024-07-25 19:58:04.742551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:55.442 [2024-07-25 19:58:04.742570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:55.442 [2024-07-25 19:58:04.742587] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:55.442 [2024-07-25 19:58:04.742605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:55.442 task offset: 31616 on job bdev=Nvme2n1 fails
00:27:55.442
00:27:55.442 Latency(us)
00:27:55.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:55.442 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme1n1 ended in about 0.90 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme1n1 : 0.90 142.86 8.93 71.43 0.00 295285.19 35535.08 267192.70
00:27:55.442 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme2n1 ended in about 0.88 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme2n1 : 0.88 222.85 13.93 72.40 0.00 209752.68 6699.24 237677.23
00:27:55.442 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme3n1 ended in about 0.89 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme3n1 : 0.89 216.92 13.56 72.31 0.00 209572.31 7718.68 260978.92
00:27:55.442 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme4n1 ended in about 0.93 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme4n1 : 0.93 206.89 12.93 68.96 0.00 215734.80 16019.91 254765.13
00:27:55.442 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme5n1 ended in about 0.93 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme5n1 : 0.93 137.43 8.59 68.72 0.00 282746.63 21942.42 250104.79
00:27:55.442 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme6n1 ended in about 0.95 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme6n1 : 0.95 138.76 8.67 67.28 0.00 277474.87 19612.25 264085.81
00:27:55.442 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme7n1 ended in about 0.95 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme7n1 : 0.95 134.09 8.38 67.05 0.00 278364.67 21942.42 256318.58
00:27:55.442 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme8n1 ended in about 0.89 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme8n1 : 0.89 212.14 13.26 3.37 0.00 250804.15 16990.81 256318.58
00:27:55.442 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme9n1 ended in about 0.95 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme9n1 : 0.95 135.20 8.45 67.60 0.00 263645.87 19903.53 288940.94
00:27:55.442 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.442 Job: Nvme10n1 ended in about 0.94 seconds with error
00:27:55.442 Verification LBA range: start 0x0 length 0x400
00:27:55.442 Nvme10n1 : 0.94 135.51 8.47 67.75 0.00 257123.11 22233.69 260978.92
00:27:55.442 ===================================================================================================================
00:27:55.443 Total : 1682.66 105.17 626.86 0.00 250155.26 6699.24 288940.94
00:27:55.443 [2024-07-25 19:58:04.770277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:55.443 [2024-07-25 19:58:04.770369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:55.443 [2024-07-25 19:58:04.770511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7bf90 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.770540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b706b0 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.770560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5f90 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.770628] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:55.443 [2024-07-25 19:58:04.770657] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:55.443 [2024-07-25 19:58:04.770677] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:55.443 [2024-07-25 19:58:04.771427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.771466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf0f50 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.771500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0f50 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.771660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.771686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4d300 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.771703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4d300 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.771817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.771842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4b190 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.771859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4b190 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.772088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.772117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b78810 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.772134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78810 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.772242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.772268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf1f00 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.772284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1f00 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.772385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.772410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0fec0 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.772427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0fec0 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.772524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.443 [2024-07-25 19:58:04.772549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1645610 with addr=10.0.0.2, port=4420 00:27:55.443 [2024-07-25 19:58:04.772565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645610 is same with the state(5) to be set 00:27:55.443 [2024-07-25 19:58:04.772581] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.772595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.772611] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:55.443 [2024-07-25 19:58:04.772633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.772648] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.772661] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:55.443 [2024-07-25 19:58:04.772679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.772694] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.772707] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:55.443 [2024-07-25 19:58:04.772742] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:55.443 [2024-07-25 19:58:04.772768] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:55.443 [2024-07-25 19:58:04.772793] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:55.443 [2024-07-25 19:58:04.773393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.773418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.773432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.773449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf0f50 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.773468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4d300 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.773487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4b190 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.773504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78810 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.773522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1f00 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.773540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0fec0 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.773557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1645610 (9): Bad file descriptor 00:27:55.443 [2024-07-25 19:58:04.774002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774028] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774044] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:27:55.443 [2024-07-25 19:58:04.774070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774087] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774100] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:55.443 [2024-07-25 19:58:04.774116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774130] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:55.443 [2024-07-25 19:58:04.774160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:55.443 [2024-07-25 19:58:04.774203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774217] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:55.443 [2024-07-25 19:58:04.774247] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774274] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:55.443 [2024-07-25 19:58:04.774295] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:55.443 [2024-07-25 19:58:04.774310] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:55.443 [2024-07-25 19:58:04.774323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:55.443 [2024-07-25 19:58:04.774375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.774394] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.774406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.774418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.774429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.443 [2024-07-25 19:58:04.774441] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:55.443 [2024-07-25 19:58:04.774463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.010 19:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:56.010 19:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4062590 00:27:56.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4062590) - No such process 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.941 rmmod nvme_tcp 00:27:56.941 rmmod nvme_fabrics 00:27:56.941 rmmod nvme_keyring 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.941 19:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.473 19:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:27:59.473 00:27:59.473 real 0m7.568s 00:27:59.473 user 0m18.441s 00:27:59.473 sys 0m1.576s 00:27:59.473 19:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:59.473 19:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:59.473 ************************************ 00:27:59.473 END TEST nvmf_shutdown_tc3 00:27:59.473 ************************************ 00:27:59.473 19:58:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:59.473 00:27:59.473 real 0m27.175s 00:27:59.473 user 1m16.006s 00:27:59.473 sys 0m6.285s 00:27:59.473 19:58:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:59.473 19:58:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.473 ************************************ 00:27:59.473 END TEST nvmf_shutdown 00:27:59.473 ************************************ 00:27:59.473 19:58:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.473 19:58:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.473 19:58:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:59.473 19:58:08 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:59.473 19:58:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.473 ************************************ 00:27:59.473 START TEST nvmf_multicontroller 00:27:59.473 ************************************ 00:27:59.473 19:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:59.474 * Looking for test storage... 
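The nvmf_shutdown suite ends here and run_test launches the multicontroller host test. run_test is essentially a timing/banner wrapper around the script it names, so the same test can be reproduced standalone from an SPDK checkout; a minimal sketch (the repo path is a placeholder for the absolute workspace path used in this job, and root privileges plus the NIC/netns setup performed below are assumed):
  cd /path/to/spdk    # placeholder for the SPDK checkout root
  sudo ./test/nvmf/host/multicontroller.sh --transport=tcp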
00:27:59.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:59.474 19:58:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.474 19:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.377 19:58:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:01.377 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:01.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:01.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:01.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:01.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.378 19:58:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:01.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:28:01.378 00:28:01.378 --- 10.0.0.2 ping statistics --- 00:28:01.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.378 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:28:01.378 00:28:01.378 --- 10.0.0.1 ping statistics --- 00:28:01.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.378 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4065109 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4065109 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4065109 ']' 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.378 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.378 [2024-07-25 19:58:10.650082] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:01.378 [2024-07-25 19:58:10.650169] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.378 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.378 [2024-07-25 19:58:10.716185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:01.378 [2024-07-25 19:58:10.797671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.378 [2024-07-25 19:58:10.797718] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.378 [2024-07-25 19:58:10.797745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.378 [2024-07-25 19:58:10.797756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.378 [2024-07-25 19:58:10.797765] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.378 [2024-07-25 19:58:10.797832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.378 [2024-07-25 19:58:10.797891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.378 [2024-07-25 19:58:10.797893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 [2024-07-25 19:58:10.941607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.638 19:58:10 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 Malloc0 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.638 19:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 [2024-07-25 19:58:10.999304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.638 [2024-07-25 19:58:11.007179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:01.638 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.639 Malloc1 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
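The rpc_cmd calls above (and the matching cnode2 listeners that follow below) build the target-side configuration over the default /var/tmp/spdk.sock RPC socket; in these tests rpc_cmd effectively forwards its arguments to scripts/rpc.py, so the cnode1 half of the setup can be sketched directly as follows, with every value copied from the calls in this log (the relative path assumes an SPDK checkout):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
Malloc1/cnode2 repeat the same pattern, as the calls continuing below show.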
00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.639 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4065137 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4065137 /var/tmp/bdevperf.sock 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4065137 ']' 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:01.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
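On the initiator side the test starts the bdevperf example app in RPC-driven mode with its own socket, attaches the target's subsystems to it as NVMe bdev controllers, and only then kicks off the workload. The NOT checks that follow exercise the failure paths: re-attaching the name NVMe0 with a different hostnqn, a different subnqn, or with -x disable/failover is expected to be rejected with JSON-RPC error -114 ("already exists"), which is what the responses below show. A sketch of the happy path, with the values copied from the calls in this log (paths are relative to an SPDK checkout/build and stand in for the absolute workspace paths here):
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &    # -z: wait for RPC configuration before running the workload
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers    # the test expects exactly 2 controllers at this point
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests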
00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.898 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.157 NVMe0n1 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.157 1 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.157 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.415 request: 00:28:02.415 { 00:28:02.415 "name": "NVMe0", 00:28:02.415 "trtype": "tcp", 00:28:02.415 "traddr": "10.0.0.2", 00:28:02.415 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:02.415 "hostaddr": "10.0.0.2", 00:28:02.415 "hostsvcid": "60000", 00:28:02.415 "adrfam": "ipv4", 00:28:02.415 "trsvcid": "4420", 00:28:02.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.415 "method": 
"bdev_nvme_attach_controller", 00:28:02.415 "req_id": 1 00:28:02.415 } 00:28:02.415 Got JSON-RPC error response 00:28:02.415 response: 00:28:02.415 { 00:28:02.415 "code": -114, 00:28:02.415 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:02.415 } 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.415 request: 00:28:02.415 { 00:28:02.415 "name": "NVMe0", 00:28:02.415 "trtype": "tcp", 00:28:02.415 "traddr": "10.0.0.2", 00:28:02.415 "hostaddr": "10.0.0.2", 00:28:02.415 "hostsvcid": "60000", 00:28:02.415 "adrfam": "ipv4", 00:28:02.415 "trsvcid": "4420", 00:28:02.415 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:02.415 "method": "bdev_nvme_attach_controller", 00:28:02.415 "req_id": 1 00:28:02.415 } 00:28:02.415 Got JSON-RPC error response 00:28:02.415 response: 00:28:02.415 { 00:28:02.415 "code": -114, 00:28:02.415 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:02.415 } 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:02.415 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.416 request: 00:28:02.416 { 00:28:02.416 "name": "NVMe0", 00:28:02.416 "trtype": "tcp", 00:28:02.416 "traddr": "10.0.0.2", 00:28:02.416 "hostaddr": "10.0.0.2", 00:28:02.416 "hostsvcid": "60000", 00:28:02.416 "adrfam": "ipv4", 00:28:02.416 "trsvcid": "4420", 00:28:02.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.416 "multipath": "disable", 00:28:02.416 "method": "bdev_nvme_attach_controller", 00:28:02.416 "req_id": 1 00:28:02.416 } 00:28:02.416 Got JSON-RPC error response 00:28:02.416 response: 00:28:02.416 { 00:28:02.416 "code": -114, 00:28:02.416 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:02.416 } 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.416 request: 00:28:02.416 { 00:28:02.416 "name": "NVMe0", 00:28:02.416 "trtype": "tcp", 00:28:02.416 "traddr": "10.0.0.2", 00:28:02.416 "hostaddr": "10.0.0.2", 00:28:02.416 "hostsvcid": "60000", 00:28:02.416 "adrfam": "ipv4", 00:28:02.416 "trsvcid": "4420", 00:28:02.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.416 "multipath": "failover", 00:28:02.416 "method": "bdev_nvme_attach_controller", 00:28:02.416 "req_id": 1 00:28:02.416 } 00:28:02.416 Got JSON-RPC error response 00:28:02.416 response: 00:28:02.416 { 00:28:02.416 "code": -114, 00:28:02.416 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:02.416 } 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.416 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.416 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.674 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:02.674 19:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:03.612 0 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4065137 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4065137 ']' 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4065137 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:03.612 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4065137 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4065137' 00:28:03.871 killing process with pid 4065137 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4065137 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4065137 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.871 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:04.134 19:58:13 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:04.134 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:04.134 [2024-07-25 19:58:11.113214] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:04.134 [2024-07-25 19:58:11.113301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065137 ] 00:28:04.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.134 [2024-07-25 19:58:11.178833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.134 [2024-07-25 19:58:11.265497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.134 [2024-07-25 19:58:11.888281] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 4b6d9b6d-a09b-45ad-a841-ed0339fc33e6 already exists 00:28:04.134 [2024-07-25 19:58:11.888321] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:4b6d9b6d-a09b-45ad-a841-ed0339fc33e6 alias for bdev NVMe1n1 00:28:04.134 [2024-07-25 19:58:11.888339] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:04.134 Running I/O for 1 seconds... 
00:28:04.134 00:28:04.134 Latency(us) 00:28:04.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.134 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:04.134 NVMe0n1 : 1.00 19344.08 75.56 0.00 0.00 6606.73 5752.60 15146.10 00:28:04.134 =================================================================================================================== 00:28:04.134 Total : 19344.08 75.56 0.00 0.00 6606.73 5752.60 15146.10 00:28:04.134 Received shutdown signal, test time was about 1.000000 seconds 00:28:04.134 00:28:04.134 Latency(us) 00:28:04.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.134 =================================================================================================================== 00:28:04.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:04.134 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.134 rmmod nvme_tcp 00:28:04.134 rmmod nvme_fabrics 00:28:04.134 rmmod nvme_keyring 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4065109 ']' 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4065109 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4065109 ']' 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4065109 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4065109 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4065109' 00:28:04.134 killing process with pid 4065109 00:28:04.134 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4065109 00:28:04.134 19:58:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4065109 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.394 19:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.298 19:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.298 00:28:06.298 real 0m7.260s 00:28:06.298 user 0m11.429s 00:28:06.298 sys 0m2.267s 00:28:06.298 19:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:06.298 19:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.298 ************************************ 00:28:06.298 END TEST nvmf_multicontroller 00:28:06.298 ************************************ 00:28:06.556 19:58:15 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:06.556 19:58:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:06.556 19:58:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:06.556 19:58:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.556 ************************************ 00:28:06.556 START TEST nvmf_aer 00:28:06.556 ************************************ 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:06.556 * Looking for test storage... 
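Before the AER output continues, a short recap of what the nvmf_multicontroller run above actually did: it boils down to a handful of bdev_nvme RPCs issued against the bdevperf application's socket. The sketch below is a hypothetical replay of that sequence using SPDK's scripts/rpc.py (the log's rpc_cmd helper wraps the same tool); it assumes you run it from an SPDK checkout as root, that bdevperf is already listening on /var/tmp/bdevperf.sock with an NVMe0 controller attached via 10.0.0.2:4420, and that the target exposes nqn.2016-06.io.spdk:cnode1 on ports 4420 and 4421, as in this log.

#!/usr/bin/env bash
# Hypothetical replay of the failover sequence above; paths assume the SPDK repo root.
set -euo pipefail
rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"   # the log's rpc_cmd wraps the same tool

# Re-attaching NVMe0 over the path it already uses is the expected-failure case
# (JSON-RPC error -114: "A controller named NVMe0 already exists with the
# specified network path").
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover || true

# Adding a second path to the same controller (port 4421) succeeds ...
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

# ... and can be removed again by naming that path on detach.
$rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

# Attach a second, independent controller to the same subsystem.
$rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# The test expects exactly two controllers before it starts I/O.
[ "$($rpc bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]

# Kick off the queued bdevperf job, then drop the second controller.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
$rpc bdev_nvme_detach_controller NVMe1

Note how only the (controller name, network path) pair has to be unique: the same subsystem is reachable both as a second path of NVMe0 and as a separate controller NVMe1, which is exactly what the request/response pairs in the log exercise.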
00:28:06.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.556 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.557 19:58:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:08.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.462 
19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.462 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:28:08.463 00:28:08.463 --- 10.0.0.2 ping statistics --- 00:28:08.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.463 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:08.463 00:28:08.463 --- 10.0.0.1 ping statistics --- 00:28:08.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.463 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4067338 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4067338 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 4067338 ']' 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:08.463 19:58:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.463 [2024-07-25 19:58:17.889273] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:08.463 [2024-07-25 19:58:17.889351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.722 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.722 [2024-07-25 19:58:17.955672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.722 [2024-07-25 19:58:18.044147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.722 [2024-07-25 19:58:18.044198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:08.722 [2024-07-25 19:58:18.044226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.722 [2024-07-25 19:58:18.044238] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.722 [2024-07-25 19:58:18.044249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.722 [2024-07-25 19:58:18.044324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.722 [2024-07-25 19:58:18.044363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.722 [2024-07-25 19:58:18.044429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.722 [2024-07-25 19:58:18.044431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 [2024-07-25 19:58:18.189902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 Malloc0 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 [2024-07-25 19:58:18.243705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.982 [ 00:28:08.982 { 00:28:08.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:08.982 "subtype": "Discovery", 00:28:08.982 "listen_addresses": [], 00:28:08.982 "allow_any_host": true, 00:28:08.982 "hosts": [] 00:28:08.982 }, 00:28:08.982 { 00:28:08.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.982 "subtype": "NVMe", 00:28:08.982 "listen_addresses": [ 00:28:08.982 { 00:28:08.982 "trtype": "TCP", 00:28:08.982 "adrfam": "IPv4", 00:28:08.982 "traddr": "10.0.0.2", 00:28:08.982 "trsvcid": "4420" 00:28:08.982 } 00:28:08.982 ], 00:28:08.982 "allow_any_host": true, 00:28:08.982 "hosts": [], 00:28:08.982 "serial_number": "SPDK00000000000001", 00:28:08.982 "model_number": "SPDK bdev Controller", 00:28:08.982 "max_namespaces": 2, 00:28:08.982 "min_cntlid": 1, 00:28:08.982 "max_cntlid": 65519, 00:28:08.982 "namespaces": [ 00:28:08.982 { 00:28:08.982 "nsid": 1, 00:28:08.982 "bdev_name": "Malloc0", 00:28:08.982 "name": "Malloc0", 00:28:08.982 "nguid": "BD1DCA3871054DA2B6EC037868B7E273", 00:28:08.982 "uuid": "bd1dca38-7105-4da2-b6ec-037868b7e273" 00:28:08.982 } 00:28:08.982 ] 00:28:08.982 } 00:28:08.982 ] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4067487 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:08.982 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:08.982 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.240 Malloc1 00:28:09.240 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.241 Asynchronous Event Request test 00:28:09.241 Attaching to 10.0.0.2 00:28:09.241 Attached to 10.0.0.2 00:28:09.241 Registering asynchronous event callbacks... 00:28:09.241 Starting namespace attribute notice tests for all controllers... 00:28:09.241 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:09.241 aer_cb - Changed Namespace 00:28:09.241 Cleaning up... 00:28:09.241 [ 00:28:09.241 { 00:28:09.241 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:09.241 "subtype": "Discovery", 00:28:09.241 "listen_addresses": [], 00:28:09.241 "allow_any_host": true, 00:28:09.241 "hosts": [] 00:28:09.241 }, 00:28:09.241 { 00:28:09.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.241 "subtype": "NVMe", 00:28:09.241 "listen_addresses": [ 00:28:09.241 { 00:28:09.241 "trtype": "TCP", 00:28:09.241 "adrfam": "IPv4", 00:28:09.241 "traddr": "10.0.0.2", 00:28:09.241 "trsvcid": "4420" 00:28:09.241 } 00:28:09.241 ], 00:28:09.241 "allow_any_host": true, 00:28:09.241 "hosts": [], 00:28:09.241 "serial_number": "SPDK00000000000001", 00:28:09.241 "model_number": "SPDK bdev Controller", 00:28:09.241 "max_namespaces": 2, 00:28:09.241 "min_cntlid": 1, 00:28:09.241 "max_cntlid": 65519, 00:28:09.241 "namespaces": [ 00:28:09.241 { 00:28:09.241 "nsid": 1, 00:28:09.241 "bdev_name": "Malloc0", 00:28:09.241 "name": "Malloc0", 00:28:09.241 "nguid": "BD1DCA3871054DA2B6EC037868B7E273", 00:28:09.241 "uuid": "bd1dca38-7105-4da2-b6ec-037868b7e273" 00:28:09.241 }, 00:28:09.241 { 00:28:09.241 "nsid": 2, 00:28:09.241 "bdev_name": "Malloc1", 00:28:09.241 "name": "Malloc1", 00:28:09.241 "nguid": "5425188260E942359B6E3C971E36B5E5", 00:28:09.241 "uuid": "54251882-60e9-4235-9b6e-3c971e36b5e5" 00:28:09.241 } 00:28:09.241 ] 00:28:09.241 } 00:28:09.241 ] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4067487 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.241 rmmod nvme_tcp 00:28:09.241 rmmod nvme_fabrics 00:28:09.241 rmmod nvme_keyring 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:09.241 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4067338 ']' 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4067338 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 4067338 ']' 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 4067338 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4067338 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4067338' 00:28:09.500 killing process with pid 4067338 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 4067338 00:28:09.500 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 4067338 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:09.761 19:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.663 19:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.663 00:28:11.663 real 0m5.217s 00:28:11.663 user 0m4.103s 00:28:11.663 sys 0m1.835s 00:28:11.663 19:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:11.663 19:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.663 ************************************ 00:28:11.663 END TEST nvmf_aer 00:28:11.663 ************************************ 00:28:11.663 19:58:21 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:11.663 19:58:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:11.663 19:58:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:11.663 19:58:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.663 ************************************ 00:28:11.663 START TEST nvmf_async_init 00:28:11.663 ************************************ 00:28:11.663 19:58:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:11.663 * Looking for test storage... 00:28:11.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.922 
19:58:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c745aba25f964a40b71fd723eb7ec1aa 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.922 19:58:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.861 19:58:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:13.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:13.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:28:13.861 00:28:13.861 --- 10.0.0.2 ping statistics --- 00:28:13.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.861 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:28:13.861 00:28:13.861 --- 10.0.0.1 ping statistics --- 00:28:13.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.861 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.861 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4069423 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4069423 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 4069423 ']' 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:13.862 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.862 [2024-07-25 19:58:23.286289] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:13.862 [2024-07-25 19:58:23.286392] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.118 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.118 [2024-07-25 19:58:23.357279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.118 [2024-07-25 19:58:23.446401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.118 [2024-07-25 19:58:23.446469] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.118 [2024-07-25 19:58:23.446485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.118 [2024-07-25 19:58:23.446499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.118 [2024-07-25 19:58:23.446512] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
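For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268, followed by nvmfappstart) reduces to the shell steps below. This is a condensed sketch of what the harness does on this host, not a drop-in script: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing come from this particular dual-port E810 box, and the nvmf_tgt path is shortened.

# Split the dual-port NIC into a target side (inside a network namespace) and an initiator side.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-facing port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP (NVMF_INITIATOR_IP)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (NVMF_FIRST_TARGET_IP)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator port
ping -c 1 10.0.0.2                                         # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator reachability check
# nvmfappstart then launches the target inside the namespace on a single core:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Running the target inside its own namespace is what lets a single machine act as both NVMe/TCP target and initiator over the two physical ports of the same NIC.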
00:28:14.118 [2024-07-25 19:58:23.446550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.375 [2024-07-25 19:58:23.584841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.375 null0 00:28:14.375 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c745aba25f964a40b71fd723eb7ec1aa 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.376 [2024-07-25 19:58:23.625144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.376 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.633 nvme0n1 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.633 [ 00:28:14.633 { 00:28:14.633 "name": "nvme0n1", 00:28:14.633 "aliases": [ 00:28:14.633 "c745aba2-5f96-4a40-b71f-d723eb7ec1aa" 00:28:14.633 ], 00:28:14.633 "product_name": "NVMe disk", 00:28:14.633 "block_size": 512, 00:28:14.633 "num_blocks": 2097152, 00:28:14.633 "uuid": "c745aba2-5f96-4a40-b71f-d723eb7ec1aa", 00:28:14.633 "assigned_rate_limits": { 00:28:14.633 "rw_ios_per_sec": 0, 00:28:14.633 "rw_mbytes_per_sec": 0, 00:28:14.633 "r_mbytes_per_sec": 0, 00:28:14.633 "w_mbytes_per_sec": 0 00:28:14.633 }, 00:28:14.633 "claimed": false, 00:28:14.633 "zoned": false, 00:28:14.633 "supported_io_types": { 00:28:14.633 "read": true, 00:28:14.633 "write": true, 00:28:14.633 "unmap": false, 00:28:14.633 "write_zeroes": true, 00:28:14.633 "flush": true, 00:28:14.633 "reset": true, 00:28:14.633 "compare": true, 00:28:14.633 "compare_and_write": true, 00:28:14.633 "abort": true, 00:28:14.633 "nvme_admin": true, 00:28:14.633 "nvme_io": true 00:28:14.633 }, 00:28:14.633 "memory_domains": [ 00:28:14.633 { 00:28:14.633 "dma_device_id": "system", 00:28:14.633 "dma_device_type": 1 00:28:14.633 } 00:28:14.633 ], 00:28:14.633 "driver_specific": { 00:28:14.633 "nvme": [ 00:28:14.633 { 00:28:14.633 "trid": { 00:28:14.633 "trtype": "TCP", 00:28:14.633 "adrfam": "IPv4", 00:28:14.633 "traddr": "10.0.0.2", 00:28:14.633 "trsvcid": "4420", 00:28:14.633 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.633 }, 00:28:14.633 "ctrlr_data": { 00:28:14.633 "cntlid": 1, 00:28:14.633 "vendor_id": "0x8086", 00:28:14.633 "model_number": "SPDK bdev Controller", 00:28:14.633 "serial_number": "00000000000000000000", 00:28:14.633 "firmware_revision": "24.05.1", 00:28:14.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.633 "oacs": { 00:28:14.633 "security": 0, 00:28:14.633 "format": 0, 00:28:14.633 "firmware": 0, 00:28:14.633 "ns_manage": 0 00:28:14.633 }, 00:28:14.633 "multi_ctrlr": true, 00:28:14.633 "ana_reporting": false 00:28:14.633 }, 00:28:14.633 "vs": { 00:28:14.633 "nvme_version": "1.3" 00:28:14.633 }, 00:28:14.633 "ns_data": { 00:28:14.633 "id": 1, 00:28:14.633 "can_share": true 00:28:14.633 } 00:28:14.633 } 00:28:14.633 ], 00:28:14.633 "mp_policy": "active_passive" 00:28:14.633 } 00:28:14.633 } 00:28:14.633 ] 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.633 19:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.633 [2024-07-25 19:58:23.877735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.633 [2024-07-25 19:58:23.877824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb5760 (9): Bad file descriptor 00:28:14.633 [2024-07-25 19:58:24.020213] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.633 [ 00:28:14.633 { 00:28:14.633 "name": "nvme0n1", 00:28:14.633 "aliases": [ 00:28:14.633 "c745aba2-5f96-4a40-b71f-d723eb7ec1aa" 00:28:14.633 ], 00:28:14.633 "product_name": "NVMe disk", 00:28:14.633 "block_size": 512, 00:28:14.633 "num_blocks": 2097152, 00:28:14.633 "uuid": "c745aba2-5f96-4a40-b71f-d723eb7ec1aa", 00:28:14.633 "assigned_rate_limits": { 00:28:14.633 "rw_ios_per_sec": 0, 00:28:14.633 "rw_mbytes_per_sec": 0, 00:28:14.633 "r_mbytes_per_sec": 0, 00:28:14.633 "w_mbytes_per_sec": 0 00:28:14.633 }, 00:28:14.633 "claimed": false, 00:28:14.633 "zoned": false, 00:28:14.633 "supported_io_types": { 00:28:14.633 "read": true, 00:28:14.633 "write": true, 00:28:14.633 "unmap": false, 00:28:14.633 "write_zeroes": true, 00:28:14.633 "flush": true, 00:28:14.633 "reset": true, 00:28:14.633 "compare": true, 00:28:14.633 "compare_and_write": true, 00:28:14.633 "abort": true, 00:28:14.633 "nvme_admin": true, 00:28:14.633 "nvme_io": true 00:28:14.633 }, 00:28:14.633 "memory_domains": [ 00:28:14.633 { 00:28:14.633 "dma_device_id": "system", 00:28:14.633 "dma_device_type": 1 00:28:14.633 } 00:28:14.633 ], 00:28:14.633 "driver_specific": { 00:28:14.633 "nvme": [ 00:28:14.633 { 00:28:14.633 "trid": { 00:28:14.633 "trtype": "TCP", 00:28:14.633 "adrfam": "IPv4", 00:28:14.633 "traddr": "10.0.0.2", 00:28:14.633 "trsvcid": "4420", 00:28:14.633 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.633 }, 00:28:14.633 "ctrlr_data": { 00:28:14.633 "cntlid": 2, 00:28:14.633 "vendor_id": "0x8086", 00:28:14.633 "model_number": "SPDK bdev Controller", 00:28:14.633 "serial_number": "00000000000000000000", 00:28:14.633 "firmware_revision": "24.05.1", 00:28:14.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.633 "oacs": { 00:28:14.633 "security": 0, 00:28:14.633 "format": 0, 00:28:14.633 "firmware": 0, 00:28:14.633 "ns_manage": 0 00:28:14.633 }, 00:28:14.633 "multi_ctrlr": true, 00:28:14.633 "ana_reporting": false 00:28:14.633 }, 00:28:14.633 "vs": { 00:28:14.633 "nvme_version": "1.3" 00:28:14.633 }, 00:28:14.633 "ns_data": { 00:28:14.633 "id": 1, 00:28:14.633 "can_share": true 00:28:14.633 } 00:28:14.633 } 00:28:14.633 ], 00:28:14.633 "mp_policy": "active_passive" 00:28:14.633 } 00:28:14.633 } 00:28:14.633 ] 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.J7kj80Edlf 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.J7kj80Edlf 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.633 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.893 [2024-07-25 19:58:24.070420] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:14.893 [2024-07-25 19:58:24.070568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.J7kj80Edlf 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.893 [2024-07-25 19:58:24.078442] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.J7kj80Edlf 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.893 [2024-07-25 19:58:24.086454] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:14.893 [2024-07-25 19:58:24.086515] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:14.893 nvme0n1 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.893 [ 00:28:14.893 { 00:28:14.893 "name": "nvme0n1", 00:28:14.893 "aliases": [ 00:28:14.893 "c745aba2-5f96-4a40-b71f-d723eb7ec1aa" 00:28:14.893 ], 00:28:14.893 
"product_name": "NVMe disk", 00:28:14.893 "block_size": 512, 00:28:14.893 "num_blocks": 2097152, 00:28:14.893 "uuid": "c745aba2-5f96-4a40-b71f-d723eb7ec1aa", 00:28:14.893 "assigned_rate_limits": { 00:28:14.893 "rw_ios_per_sec": 0, 00:28:14.893 "rw_mbytes_per_sec": 0, 00:28:14.893 "r_mbytes_per_sec": 0, 00:28:14.893 "w_mbytes_per_sec": 0 00:28:14.893 }, 00:28:14.893 "claimed": false, 00:28:14.893 "zoned": false, 00:28:14.893 "supported_io_types": { 00:28:14.893 "read": true, 00:28:14.893 "write": true, 00:28:14.893 "unmap": false, 00:28:14.893 "write_zeroes": true, 00:28:14.893 "flush": true, 00:28:14.893 "reset": true, 00:28:14.893 "compare": true, 00:28:14.893 "compare_and_write": true, 00:28:14.893 "abort": true, 00:28:14.893 "nvme_admin": true, 00:28:14.893 "nvme_io": true 00:28:14.893 }, 00:28:14.893 "memory_domains": [ 00:28:14.893 { 00:28:14.893 "dma_device_id": "system", 00:28:14.893 "dma_device_type": 1 00:28:14.893 } 00:28:14.893 ], 00:28:14.893 "driver_specific": { 00:28:14.893 "nvme": [ 00:28:14.893 { 00:28:14.893 "trid": { 00:28:14.893 "trtype": "TCP", 00:28:14.893 "adrfam": "IPv4", 00:28:14.893 "traddr": "10.0.0.2", 00:28:14.893 "trsvcid": "4421", 00:28:14.893 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.893 }, 00:28:14.893 "ctrlr_data": { 00:28:14.893 "cntlid": 3, 00:28:14.893 "vendor_id": "0x8086", 00:28:14.893 "model_number": "SPDK bdev Controller", 00:28:14.893 "serial_number": "00000000000000000000", 00:28:14.893 "firmware_revision": "24.05.1", 00:28:14.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.893 "oacs": { 00:28:14.893 "security": 0, 00:28:14.893 "format": 0, 00:28:14.893 "firmware": 0, 00:28:14.893 "ns_manage": 0 00:28:14.893 }, 00:28:14.893 "multi_ctrlr": true, 00:28:14.893 "ana_reporting": false 00:28:14.893 }, 00:28:14.893 "vs": { 00:28:14.893 "nvme_version": "1.3" 00:28:14.893 }, 00:28:14.893 "ns_data": { 00:28:14.893 "id": 1, 00:28:14.893 "can_share": true 00:28:14.893 } 00:28:14.893 } 00:28:14.893 ], 00:28:14.893 "mp_policy": "active_passive" 00:28:14.893 } 00:28:14.893 } 00:28:14.893 ] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.J7kj80Edlf 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:14.893 rmmod nvme_tcp 00:28:14.893 rmmod nvme_fabrics 00:28:14.893 rmmod nvme_keyring 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4069423 ']' 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4069423 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 4069423 ']' 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 4069423 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4069423 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4069423' 00:28:14.893 killing process with pid 4069423 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 4069423 00:28:14.893 [2024-07-25 19:58:24.262211] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:14.893 [2024-07-25 19:58:24.262248] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:14.893 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 4069423 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.152 19:58:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.683 19:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.683 00:28:17.683 real 0m5.492s 00:28:17.683 user 0m2.076s 00:28:17.683 sys 0m1.796s 00:28:17.683 19:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.683 19:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.683 ************************************ 00:28:17.683 END TEST nvmf_async_init 00:28:17.683 ************************************ 00:28:17.683 19:58:26 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:17.683 19:58:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:17.683 19:58:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:17.683 19:58:26 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.683 ************************************ 00:28:17.683 START TEST dma 00:28:17.683 ************************************ 00:28:17.683 19:58:26 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:17.683 * Looking for test storage... 00:28:17.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.683 19:58:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.683 19:58:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.683 19:58:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.683 19:58:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.683 19:58:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.683 19:58:26 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.683 19:58:26 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.683 19:58:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:17.683 19:58:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.683 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.684 19:58:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.684 19:58:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:17.684 19:58:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:17.684 00:28:17.684 real 0m0.070s 00:28:17.684 user 0m0.028s 00:28:17.684 sys 0m0.047s 00:28:17.684 19:58:26 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.684 19:58:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:17.684 ************************************ 00:28:17.684 END TEST dma 00:28:17.684 ************************************ 00:28:17.684 19:58:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:17.684 19:58:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:17.684 19:58:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:17.684 19:58:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.684 ************************************ 00:28:17.684 START TEST 
nvmf_identify 00:28:17.684 ************************************ 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:17.684 * Looking for test storage... 00:28:17.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.684 19:58:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.584 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:28:19.585 00:28:19.585 --- 10.0.0.2 ping statistics --- 00:28:19.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.585 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:28:19.585 00:28:19.585 --- 10.0.0.1 ping statistics --- 00:28:19.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.585 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4071542 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4071542 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 4071542 ']' 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.585 19:58:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.585 [2024-07-25 19:58:28.773412] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:19.585 [2024-07-25 19:58:28.773506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.585 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.585 [2024-07-25 19:58:28.843447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.585 [2024-07-25 19:58:28.935305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
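The nvmfappstart/waitforlisten pattern logged above backgrounds nvmf_tgt (here with -m 0xF, so four reactors) and then blocks until the application answers on its RPC socket before any rpc_cmd calls are made. As a rough illustration only, not the harness's actual waitforlisten implementation, the wait amounts to polling the socket with a cheap RPC, assuming scripts/rpc.py and the default /var/tmp/spdk.sock:

# Hypothetical stand-in for the readiness wait; retry count and sleep interval are illustrative.
pid=$!                                                    # pid of the just-launched nvmf_tgt
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
    break                                                 # RPC socket is up; the target is ready
  fi
  sleep 0.5
done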
00:28:19.585 [2024-07-25 19:58:28.935364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.585 [2024-07-25 19:58:28.935389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.585 [2024-07-25 19:58:28.935402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.585 [2024-07-25 19:58:28.935414] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.585 [2024-07-25 19:58:28.935497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.585 [2024-07-25 19:58:28.935567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.585 [2024-07-25 19:58:28.935659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.586 [2024-07-25 19:58:28.935661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 [2024-07-25 19:58:29.071874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 Malloc0 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 [2024-07-25 19:58:29.153385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.845 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.845 [ 00:28:19.845 { 00:28:19.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:19.845 "subtype": "Discovery", 00:28:19.845 "listen_addresses": [ 00:28:19.845 { 00:28:19.845 "trtype": "TCP", 00:28:19.845 "adrfam": "IPv4", 00:28:19.845 "traddr": "10.0.0.2", 00:28:19.845 "trsvcid": "4420" 00:28:19.845 } 00:28:19.845 ], 00:28:19.845 "allow_any_host": true, 00:28:19.845 "hosts": [] 00:28:19.845 }, 00:28:19.845 { 00:28:19.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.845 "subtype": "NVMe", 00:28:19.845 "listen_addresses": [ 00:28:19.845 { 00:28:19.845 "trtype": "TCP", 00:28:19.845 "adrfam": "IPv4", 00:28:19.846 "traddr": "10.0.0.2", 00:28:19.846 "trsvcid": "4420" 00:28:19.846 } 00:28:19.846 ], 00:28:19.846 "allow_any_host": true, 00:28:19.846 "hosts": [], 00:28:19.846 "serial_number": "SPDK00000000000001", 00:28:19.846 "model_number": "SPDK bdev Controller", 00:28:19.846 "max_namespaces": 32, 00:28:19.846 "min_cntlid": 1, 00:28:19.846 "max_cntlid": 65519, 00:28:19.846 "namespaces": [ 00:28:19.846 { 00:28:19.846 "nsid": 1, 00:28:19.846 "bdev_name": "Malloc0", 00:28:19.846 "name": "Malloc0", 00:28:19.846 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:19.846 "eui64": "ABCDEF0123456789", 00:28:19.846 "uuid": "128a1300-fcea-4d1e-8048-1b4f939dd82d" 00:28:19.846 } 00:28:19.846 ] 00:28:19.846 } 00:28:19.846 ] 00:28:19.846 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.846 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:19.846 [2024-07-25 19:58:29.194794] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
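The rpc_cmd calls and the identify invocation recorded above amount to the sequence below; a sketch for reference only, with every flag value copied from the log. The rpc_cmd function here is an assumed stand-in for the harness helper, wired to scripts/rpc.py and the default /var/tmp/spdk.sock socket.

#!/usr/bin/env bash
# Sketch: target configuration as performed by host/identify.sh above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_cmd() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }  # stand-in for the harness helper

rpc_cmd nvmf_create_transport -t tcp -o -u 8192       # TCP transport, same options as the test
rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems                           # returns the JSON dump shown above

# Query the discovery subsystem over TCP; -L all turns on the *DEBUG* traces
# that make up the bulk of the log that follows.
"$SPDK/build/bin/spdk_nvme_identify" \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all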
00:28:19.846 [2024-07-25 19:58:29.194839] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071570 ] 00:28:19.846 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.846 [2024-07-25 19:58:29.231306] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:19.846 [2024-07-25 19:58:29.231395] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:19.846 [2024-07-25 19:58:29.231406] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:19.846 [2024-07-25 19:58:29.231421] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:19.846 [2024-07-25 19:58:29.231434] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:19.846 [2024-07-25 19:58:29.231706] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:19.846 [2024-07-25 19:58:29.231761] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe1a980 0 00:28:19.846 [2024-07-25 19:58:29.238079] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:19.846 [2024-07-25 19:58:29.238099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:19.846 [2024-07-25 19:58:29.238107] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:19.846 [2024-07-25 19:58:29.238113] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:19.846 [2024-07-25 19:58:29.238178] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.238191] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.238198] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.238216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:19.846 [2024-07-25 19:58:29.238242] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.846 [2024-07-25 19:58:29.246075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.846 [2024-07-25 19:58:29.246093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.846 [2024-07-25 19:58:29.246100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246107] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.846 [2024-07-25 19:58:29.246122] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:19.846 [2024-07-25 19:58:29.246147] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:19.846 [2024-07-25 19:58:29.246156] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:19.846 [2024-07-25 19:58:29.246177] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246186] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246193] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.246205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.846 [2024-07-25 19:58:29.246229] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.846 [2024-07-25 19:58:29.246345] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.846 [2024-07-25 19:58:29.246357] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.846 [2024-07-25 19:58:29.246368] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246376] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.846 [2024-07-25 19:58:29.246389] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:19.846 [2024-07-25 19:58:29.246402] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:19.846 [2024-07-25 19:58:29.246415] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246423] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.246440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.846 [2024-07-25 19:58:29.246461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.846 [2024-07-25 19:58:29.246560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.846 [2024-07-25 19:58:29.246573] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.846 [2024-07-25 19:58:29.246579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246586] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.846 [2024-07-25 19:58:29.246595] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:19.846 [2024-07-25 19:58:29.246609] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:19.846 [2024-07-25 19:58:29.246621] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246628] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.246646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.846 [2024-07-25 19:58:29.246666] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.846 [2024-07-25 19:58:29.246771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.846 [2024-07-25 19:58:29.246786] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.846 [2024-07-25 19:58:29.246793] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246800] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.846 [2024-07-25 19:58:29.246809] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:19.846 [2024-07-25 19:58:29.246826] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246835] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.246852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.846 [2024-07-25 19:58:29.246873] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.846 [2024-07-25 19:58:29.246968] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.846 [2024-07-25 19:58:29.246983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.846 [2024-07-25 19:58:29.246990] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.246997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.846 [2024-07-25 19:58:29.247009] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:19.846 [2024-07-25 19:58:29.247019] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:19.846 [2024-07-25 19:58:29.247032] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:19.846 [2024-07-25 19:58:29.247142] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:19.846 [2024-07-25 19:58:29.247152] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:19.846 [2024-07-25 19:58:29.247166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.247173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.247180] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.247190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.846 [2024-07-25 19:58:29.247212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.846 [2024-07-25 19:58:29.247313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.846 [2024-07-25 19:58:29.247325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.846 [2024-07-25 19:58:29.247332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.846 
[2024-07-25 19:58:29.247339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.846 [2024-07-25 19:58:29.247347] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:19.846 [2024-07-25 19:58:29.247363] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.247372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.846 [2024-07-25 19:58:29.247379] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.846 [2024-07-25 19:58:29.247389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.846 [2024-07-25 19:58:29.247409] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.847 [2024-07-25 19:58:29.247505] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.847 [2024-07-25 19:58:29.247517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.847 [2024-07-25 19:58:29.247524] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.847 [2024-07-25 19:58:29.247531] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:19.847 [2024-07-25 19:58:29.247539] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:19.847 [2024-07-25 19:58:29.247547] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:19.847 [2024-07-25 19:58:29.247560] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:19.847 [2024-07-25 19:58:29.247575] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:19.847 [2024-07-25 19:58:29.247591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.847 [2024-07-25 19:58:29.247600] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:19.847 [2024-07-25 19:58:29.247611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.847 [2024-07-25 19:58:29.247636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:19.847 [2024-07-25 19:58:29.247772] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.847 [2024-07-25 19:58:29.247784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.847 [2024-07-25 19:58:29.247791] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.847 [2024-07-25 19:58:29.247798] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1a980): datao=0, datal=4096, cccid=0 00:28:19.847 [2024-07-25 19:58:29.247806] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe824c0) on tqpair(0xe1a980): expected_datao=0, payload_size=4096 00:28:19.847 [2024-07-25 19:58:29.247814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.847 
[2024-07-25 19:58:29.247831] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.847 [2024-07-25 19:58:29.247840] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.110 [2024-07-25 19:58:29.288171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.110 [2024-07-25 19:58:29.288178] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288186] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:20.110 [2024-07-25 19:58:29.288204] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:20.110 [2024-07-25 19:58:29.288214] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:20.110 [2024-07-25 19:58:29.288223] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:20.110 [2024-07-25 19:58:29.288231] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:20.110 [2024-07-25 19:58:29.288240] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:20.110 [2024-07-25 19:58:29.288250] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:20.110 [2024-07-25 19:58:29.288266] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:20.110 [2024-07-25 19:58:29.288280] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288294] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:20.110 [2024-07-25 19:58:29.288330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:20.110 [2024-07-25 19:58:29.288442] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.110 [2024-07-25 19:58:29.288454] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.110 [2024-07-25 19:58:29.288461] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288468] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe824c0) on tqpair=0xe1a980 00:28:20.110 [2024-07-25 19:58:29.288481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288489] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288496] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.110 [2024-07-25 19:58:29.288517] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288529] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.110 [2024-07-25 19:58:29.288555] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288569] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.110 [2024-07-25 19:58:29.288587] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288594] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288616] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.110 [2024-07-25 19:58:29.288634] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:20.110 [2024-07-25 19:58:29.288653] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:20.110 [2024-07-25 19:58:29.288666] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.110 [2024-07-25 19:58:29.288704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe824c0, cid 0, qid 0 00:28:20.110 [2024-07-25 19:58:29.288731] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82620, cid 1, qid 0 00:28:20.110 [2024-07-25 19:58:29.288739] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82780, cid 2, qid 0 00:28:20.110 [2024-07-25 19:58:29.288747] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.110 [2024-07-25 19:58:29.288756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82a40, cid 4, qid 0 00:28:20.110 [2024-07-25 19:58:29.288880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.110 [2024-07-25 19:58:29.288892] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.110 [2024-07-25 19:58:29.288899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288906] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe82a40) on tqpair=0xe1a980 00:28:20.110 [2024-07-25 19:58:29.288915] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:20.110 [2024-07-25 19:58:29.288924] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:20.110 [2024-07-25 19:58:29.288941] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.288951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.288961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.110 [2024-07-25 19:58:29.288982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82a40, cid 4, qid 0 00:28:20.110 [2024-07-25 19:58:29.289096] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.110 [2024-07-25 19:58:29.289116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.110 [2024-07-25 19:58:29.289124] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289131] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1a980): datao=0, datal=4096, cccid=4 00:28:20.110 [2024-07-25 19:58:29.289140] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82a40) on tqpair(0xe1a980): expected_datao=0, payload_size=4096 00:28:20.110 [2024-07-25 19:58:29.289149] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289166] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289175] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.110 [2024-07-25 19:58:29.289222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.110 [2024-07-25 19:58:29.289229] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe82a40) on tqpair=0xe1a980 00:28:20.110 [2024-07-25 19:58:29.289255] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:20.110 [2024-07-25 19:58:29.289292] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289302] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.289313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.110 [2024-07-25 19:58:29.289325] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1a980) 00:28:20.110 [2024-07-25 19:58:29.289348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.110 [2024-07-25 19:58:29.289384] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xe82a40, cid 4, qid 0 00:28:20.110 [2024-07-25 19:58:29.289396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82ba0, cid 5, qid 0 00:28:20.110 [2024-07-25 19:58:29.289547] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.110 [2024-07-25 19:58:29.289562] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.110 [2024-07-25 19:58:29.289569] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289576] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1a980): datao=0, datal=1024, cccid=4 00:28:20.110 [2024-07-25 19:58:29.289583] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82a40) on tqpair(0xe1a980): expected_datao=0, payload_size=1024 00:28:20.110 [2024-07-25 19:58:29.289591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289601] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289608] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289616] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.110 [2024-07-25 19:58:29.289625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.110 [2024-07-25 19:58:29.289632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.289639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe82ba0) on tqpair=0xe1a980 00:28:20.110 [2024-07-25 19:58:29.334077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.110 [2024-07-25 19:58:29.334095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.110 [2024-07-25 19:58:29.334103] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.110 [2024-07-25 19:58:29.334110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe82a40) on tqpair=0xe1a980 00:28:20.111 [2024-07-25 19:58:29.334131] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1a980) 00:28:20.111 [2024-07-25 19:58:29.334152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.111 [2024-07-25 19:58:29.334195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82a40, cid 4, qid 0 00:28:20.111 [2024-07-25 19:58:29.334316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.111 [2024-07-25 19:58:29.334331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.111 [2024-07-25 19:58:29.334338] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334345] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1a980): datao=0, datal=3072, cccid=4 00:28:20.111 [2024-07-25 19:58:29.334352] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82a40) on tqpair(0xe1a980): expected_datao=0, payload_size=3072 00:28:20.111 [2024-07-25 19:58:29.334360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334377] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334384] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.111 [2024-07-25 19:58:29.334407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.111 [2024-07-25 19:58:29.334415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334423] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe82a40) on tqpair=0xe1a980 00:28:20.111 [2024-07-25 19:58:29.334439] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334448] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1a980) 00:28:20.111 [2024-07-25 19:58:29.334459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.111 [2024-07-25 19:58:29.334487] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82a40, cid 4, qid 0 00:28:20.111 [2024-07-25 19:58:29.334600] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.111 [2024-07-25 19:58:29.334615] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.111 [2024-07-25 19:58:29.334622] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334629] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1a980): datao=0, datal=8, cccid=4 00:28:20.111 [2024-07-25 19:58:29.334636] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82a40) on tqpair(0xe1a980): expected_datao=0, payload_size=8 00:28:20.111 [2024-07-25 19:58:29.334645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334656] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.334663] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.375146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.111 [2024-07-25 19:58:29.375165] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.111 [2024-07-25 19:58:29.375172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.111 [2024-07-25 19:58:29.375179] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe82a40) on tqpair=0xe1a980 00:28:20.111 ===================================================== 00:28:20.111 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:20.111 ===================================================== 00:28:20.111 Controller Capabilities/Features 00:28:20.111 ================================ 00:28:20.111 Vendor ID: 0000 00:28:20.111 Subsystem Vendor ID: 0000 00:28:20.111 Serial Number: .................... 00:28:20.111 Model Number: ........................................ 
00:28:20.111 Firmware Version: 24.05.1 00:28:20.111 Recommended Arb Burst: 0 00:28:20.111 IEEE OUI Identifier: 00 00 00 00:28:20.111 Multi-path I/O 00:28:20.111 May have multiple subsystem ports: No 00:28:20.111 May have multiple controllers: No 00:28:20.111 Associated with SR-IOV VF: No 00:28:20.111 Max Data Transfer Size: 131072 00:28:20.111 Max Number of Namespaces: 0 00:28:20.111 Max Number of I/O Queues: 1024 00:28:20.111 NVMe Specification Version (VS): 1.3 00:28:20.111 NVMe Specification Version (Identify): 1.3 00:28:20.111 Maximum Queue Entries: 128 00:28:20.111 Contiguous Queues Required: Yes 00:28:20.111 Arbitration Mechanisms Supported 00:28:20.111 Weighted Round Robin: Not Supported 00:28:20.111 Vendor Specific: Not Supported 00:28:20.111 Reset Timeout: 15000 ms 00:28:20.111 Doorbell Stride: 4 bytes 00:28:20.111 NVM Subsystem Reset: Not Supported 00:28:20.111 Command Sets Supported 00:28:20.111 NVM Command Set: Supported 00:28:20.111 Boot Partition: Not Supported 00:28:20.111 Memory Page Size Minimum: 4096 bytes 00:28:20.111 Memory Page Size Maximum: 4096 bytes 00:28:20.111 Persistent Memory Region: Not Supported 00:28:20.111 Optional Asynchronous Events Supported 00:28:20.111 Namespace Attribute Notices: Not Supported 00:28:20.111 Firmware Activation Notices: Not Supported 00:28:20.111 ANA Change Notices: Not Supported 00:28:20.111 PLE Aggregate Log Change Notices: Not Supported 00:28:20.111 LBA Status Info Alert Notices: Not Supported 00:28:20.111 EGE Aggregate Log Change Notices: Not Supported 00:28:20.111 Normal NVM Subsystem Shutdown event: Not Supported 00:28:20.111 Zone Descriptor Change Notices: Not Supported 00:28:20.111 Discovery Log Change Notices: Supported 00:28:20.111 Controller Attributes 00:28:20.111 128-bit Host Identifier: Not Supported 00:28:20.111 Non-Operational Permissive Mode: Not Supported 00:28:20.111 NVM Sets: Not Supported 00:28:20.111 Read Recovery Levels: Not Supported 00:28:20.111 Endurance Groups: Not Supported 00:28:20.111 Predictable Latency Mode: Not Supported 00:28:20.111 Traffic Based Keep ALive: Not Supported 00:28:20.111 Namespace Granularity: Not Supported 00:28:20.111 SQ Associations: Not Supported 00:28:20.111 UUID List: Not Supported 00:28:20.111 Multi-Domain Subsystem: Not Supported 00:28:20.111 Fixed Capacity Management: Not Supported 00:28:20.111 Variable Capacity Management: Not Supported 00:28:20.111 Delete Endurance Group: Not Supported 00:28:20.111 Delete NVM Set: Not Supported 00:28:20.111 Extended LBA Formats Supported: Not Supported 00:28:20.111 Flexible Data Placement Supported: Not Supported 00:28:20.111 00:28:20.111 Controller Memory Buffer Support 00:28:20.111 ================================ 00:28:20.111 Supported: No 00:28:20.111 00:28:20.111 Persistent Memory Region Support 00:28:20.111 ================================ 00:28:20.111 Supported: No 00:28:20.111 00:28:20.111 Admin Command Set Attributes 00:28:20.111 ============================ 00:28:20.111 Security Send/Receive: Not Supported 00:28:20.111 Format NVM: Not Supported 00:28:20.111 Firmware Activate/Download: Not Supported 00:28:20.111 Namespace Management: Not Supported 00:28:20.111 Device Self-Test: Not Supported 00:28:20.111 Directives: Not Supported 00:28:20.111 NVMe-MI: Not Supported 00:28:20.111 Virtualization Management: Not Supported 00:28:20.111 Doorbell Buffer Config: Not Supported 00:28:20.111 Get LBA Status Capability: Not Supported 00:28:20.111 Command & Feature Lockdown Capability: Not Supported 00:28:20.111 Abort Command Limit: 1 00:28:20.111 
Async Event Request Limit: 4 00:28:20.111 Number of Firmware Slots: N/A 00:28:20.111 Firmware Slot 1 Read-Only: N/A 00:28:20.111 Firmware Activation Without Reset: N/A 00:28:20.111 Multiple Update Detection Support: N/A 00:28:20.111 Firmware Update Granularity: No Information Provided 00:28:20.111 Per-Namespace SMART Log: No 00:28:20.111 Asymmetric Namespace Access Log Page: Not Supported 00:28:20.111 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:20.111 Command Effects Log Page: Not Supported 00:28:20.111 Get Log Page Extended Data: Supported 00:28:20.111 Telemetry Log Pages: Not Supported 00:28:20.111 Persistent Event Log Pages: Not Supported 00:28:20.111 Supported Log Pages Log Page: May Support 00:28:20.111 Commands Supported & Effects Log Page: Not Supported 00:28:20.111 Feature Identifiers & Effects Log Page:May Support 00:28:20.111 NVMe-MI Commands & Effects Log Page: May Support 00:28:20.111 Data Area 4 for Telemetry Log: Not Supported 00:28:20.111 Error Log Page Entries Supported: 128 00:28:20.111 Keep Alive: Not Supported 00:28:20.111 00:28:20.111 NVM Command Set Attributes 00:28:20.111 ========================== 00:28:20.111 Submission Queue Entry Size 00:28:20.111 Max: 1 00:28:20.111 Min: 1 00:28:20.111 Completion Queue Entry Size 00:28:20.111 Max: 1 00:28:20.111 Min: 1 00:28:20.111 Number of Namespaces: 0 00:28:20.111 Compare Command: Not Supported 00:28:20.111 Write Uncorrectable Command: Not Supported 00:28:20.111 Dataset Management Command: Not Supported 00:28:20.111 Write Zeroes Command: Not Supported 00:28:20.111 Set Features Save Field: Not Supported 00:28:20.111 Reservations: Not Supported 00:28:20.111 Timestamp: Not Supported 00:28:20.111 Copy: Not Supported 00:28:20.111 Volatile Write Cache: Not Present 00:28:20.111 Atomic Write Unit (Normal): 1 00:28:20.111 Atomic Write Unit (PFail): 1 00:28:20.111 Atomic Compare & Write Unit: 1 00:28:20.111 Fused Compare & Write: Supported 00:28:20.111 Scatter-Gather List 00:28:20.112 SGL Command Set: Supported 00:28:20.112 SGL Keyed: Supported 00:28:20.112 SGL Bit Bucket Descriptor: Not Supported 00:28:20.112 SGL Metadata Pointer: Not Supported 00:28:20.112 Oversized SGL: Not Supported 00:28:20.112 SGL Metadata Address: Not Supported 00:28:20.112 SGL Offset: Supported 00:28:20.112 Transport SGL Data Block: Not Supported 00:28:20.112 Replay Protected Memory Block: Not Supported 00:28:20.112 00:28:20.112 Firmware Slot Information 00:28:20.112 ========================= 00:28:20.112 Active slot: 0 00:28:20.112 00:28:20.112 00:28:20.112 Error Log 00:28:20.112 ========= 00:28:20.112 00:28:20.112 Active Namespaces 00:28:20.112 ================= 00:28:20.112 Discovery Log Page 00:28:20.112 ================== 00:28:20.112 Generation Counter: 2 00:28:20.112 Number of Records: 2 00:28:20.112 Record Format: 0 00:28:20.112 00:28:20.112 Discovery Log Entry 0 00:28:20.112 ---------------------- 00:28:20.112 Transport Type: 3 (TCP) 00:28:20.112 Address Family: 1 (IPv4) 00:28:20.112 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:20.112 Entry Flags: 00:28:20.112 Duplicate Returned Information: 1 00:28:20.112 Explicit Persistent Connection Support for Discovery: 1 00:28:20.112 Transport Requirements: 00:28:20.112 Secure Channel: Not Required 00:28:20.112 Port ID: 0 (0x0000) 00:28:20.112 Controller ID: 65535 (0xffff) 00:28:20.112 Admin Max SQ Size: 128 00:28:20.112 Transport Service Identifier: 4420 00:28:20.112 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:20.112 Transport Address: 10.0.0.2 00:28:20.112 
Discovery Log Entry 1 00:28:20.112 ---------------------- 00:28:20.112 Transport Type: 3 (TCP) 00:28:20.112 Address Family: 1 (IPv4) 00:28:20.112 Subsystem Type: 2 (NVM Subsystem) 00:28:20.112 Entry Flags: 00:28:20.112 Duplicate Returned Information: 0 00:28:20.112 Explicit Persistent Connection Support for Discovery: 0 00:28:20.112 Transport Requirements: 00:28:20.112 Secure Channel: Not Required 00:28:20.112 Port ID: 0 (0x0000) 00:28:20.112 Controller ID: 65535 (0xffff) 00:28:20.112 Admin Max SQ Size: 128 00:28:20.112 Transport Service Identifier: 4420 00:28:20.112 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:20.112 Transport Address: 10.0.0.2 [2024-07-25 19:58:29.375289] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:20.112 [2024-07-25 19:58:29.375313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.112 [2024-07-25 19:58:29.375325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.112 [2024-07-25 19:58:29.375338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.112 [2024-07-25 19:58:29.375347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.112 [2024-07-25 19:58:29.375374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375389] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.375400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.375424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.375514] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 19:58:29.375529] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.375536] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375543] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.375555] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375569] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.375580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.375606] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.375723] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 19:58:29.375735] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.375742] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375749] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.375758] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:20.112 [2024-07-25 19:58:29.375766] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:20.112 [2024-07-25 19:58:29.375782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375791] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375797] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.375808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.375828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.375936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 19:58:29.375950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.375957] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.375964] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.375991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.376018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.376042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.376160] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 19:58:29.376175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.376182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376189] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.376206] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376221] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.376232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.376253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.376352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 
19:58:29.376369] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.376376] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376383] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.376399] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376409] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.376426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.376447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.376541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 19:58:29.376553] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.376560] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376567] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.376583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376593] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376600] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.112 [2024-07-25 19:58:29.376610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.112 [2024-07-25 19:58:29.376631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.112 [2024-07-25 19:58:29.376741] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.112 [2024-07-25 19:58:29.376755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.112 [2024-07-25 19:58:29.376762] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.112 [2024-07-25 19:58:29.376786] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.112 [2024-07-25 19:58:29.376795] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.376802] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.376812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.376833] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.376924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.376936] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.376943] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 
[2024-07-25 19:58:29.376950] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.376966] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.376975] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.376981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.376992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.377013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.377130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.377145] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.377152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377159] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.377176] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.377202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.377223] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.377318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.377330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.377336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377343] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.377359] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.377396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.377418] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.377528] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.377543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.377550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.377573] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.377600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.377622] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.377724] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.377743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.377751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377758] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.377774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377784] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377790] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.377801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.377821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.377934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.377946] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.377953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377960] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.377976] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.377992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.378002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.378023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.382075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.382091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.382098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.382105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.382136] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.382146] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.382153] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1a980) 00:28:20.113 [2024-07-25 19:58:29.382164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.113 [2024-07-25 19:58:29.382186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828e0, cid 3, qid 0 00:28:20.113 [2024-07-25 19:58:29.382296] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.382311] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.382318] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.382324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe828e0) on tqpair=0xe1a980 00:28:20.113 [2024-07-25 19:58:29.382338] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:20.113 00:28:20.113 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:20.113 [2024-07-25 19:58:29.413448] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:20.113 [2024-07-25 19:58:29.413492] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071691 ] 00:28:20.113 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.113 [2024-07-25 19:58:29.446819] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:20.113 [2024-07-25 19:58:29.446863] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:20.113 [2024-07-25 19:58:29.446872] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:20.113 [2024-07-25 19:58:29.446888] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:20.113 [2024-07-25 19:58:29.446900] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:20.113 [2024-07-25 19:58:29.447100] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:20.113 [2024-07-25 19:58:29.447141] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6a1980 0 00:28:20.113 [2024-07-25 19:58:29.454078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:20.113 [2024-07-25 19:58:29.454096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:20.113 [2024-07-25 19:58:29.454119] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:20.113 [2024-07-25 19:58:29.454126] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:20.113 [2024-07-25 19:58:29.454165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.454177] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.454184] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.113 
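[editor note] The spdk_nvme_identify run above is handed the target as a transport-ID string ("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"), and the debug lines that follow are the driver resolving that string, opening the TCP socket, exchanging ICReq/ICResp and sending the FABRIC CONNECT capsule on admin qpair 0. As a minimal sketch (not the test tool's own source), this is roughly how an application reaches the same point through SPDK's public API; the program name and error handling are illustrative assumptions.

```c
/* Minimal sketch: parse a transport-ID string like the one passed to
 * spdk_nvme_identify with -r, attach to the controller over NVMe/TCP,
 * and print a little of its identify data. Illustrative only. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";           /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same key:value syntax as the -r argument in the log above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Blocks until the admin queue is connected and the controller is
	 * initialized, i.e. the state machine traced in the debug log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Attached to %s, model '%.40s'\n",
	       trid.subnqn, (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```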
[2024-07-25 19:58:29.454198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:20.113 [2024-07-25 19:58:29.454224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.113 [2024-07-25 19:58:29.462072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.113 [2024-07-25 19:58:29.462089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.113 [2024-07-25 19:58:29.462097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.462104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.113 [2024-07-25 19:58:29.462144] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:20.113 [2024-07-25 19:58:29.462156] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:20.113 [2024-07-25 19:58:29.462166] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:20.113 [2024-07-25 19:58:29.462185] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.462193] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.113 [2024-07-25 19:58:29.462200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.114 [2024-07-25 19:58:29.462212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.114 [2024-07-25 19:58:29.462235] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.114 [2024-07-25 19:58:29.462366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.114 [2024-07-25 19:58:29.462381] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.114 [2024-07-25 19:58:29.462388] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.114 [2024-07-25 19:58:29.462407] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:20.114 [2024-07-25 19:58:29.462425] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:20.114 [2024-07-25 19:58:29.462448] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462456] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.114 [2024-07-25 19:58:29.462473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.114 [2024-07-25 19:58:29.462495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.114 [2024-07-25 19:58:29.462592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.114 [2024-07-25 19:58:29.462605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.114 [2024-07-25 19:58:29.462612] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462619] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.114 [2024-07-25 19:58:29.462627] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:20.114 [2024-07-25 19:58:29.462641] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:20.114 [2024-07-25 19:58:29.462653] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462661] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462668] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.114 [2024-07-25 19:58:29.462678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.114 [2024-07-25 19:58:29.462699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.114 [2024-07-25 19:58:29.462792] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.114 [2024-07-25 19:58:29.462806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.114 [2024-07-25 19:58:29.462813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462820] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.114 [2024-07-25 19:58:29.462829] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:20.114 [2024-07-25 19:58:29.462846] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462855] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.462862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.114 [2024-07-25 19:58:29.462872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.114 [2024-07-25 19:58:29.462893] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.114 [2024-07-25 19:58:29.462992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.114 [2024-07-25 19:58:29.463006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.114 [2024-07-25 19:58:29.463013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463020] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.114 [2024-07-25 19:58:29.463028] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:20.114 [2024-07-25 19:58:29.463036] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:20.114 [2024-07-25 19:58:29.463050] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:20.114 [2024-07-25 19:58:29.463171] 
nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:20.114 [2024-07-25 19:58:29.463180] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:20.114 [2024-07-25 19:58:29.463192] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.114 [2024-07-25 19:58:29.463217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.114 [2024-07-25 19:58:29.463239] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.114 [2024-07-25 19:58:29.463365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.114 [2024-07-25 19:58:29.463380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.114 [2024-07-25 19:58:29.463386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463393] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.114 [2024-07-25 19:58:29.463401] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:20.114 [2024-07-25 19:58:29.463418] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463434] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.114 [2024-07-25 19:58:29.463445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.114 [2024-07-25 19:58:29.463465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.114 [2024-07-25 19:58:29.463579] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.114 [2024-07-25 19:58:29.463592] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.114 [2024-07-25 19:58:29.463598] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.114 [2024-07-25 19:58:29.463605] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.114 [2024-07-25 19:58:29.463613] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:20.114 [2024-07-25 19:58:29.463621] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:20.114 [2024-07-25 19:58:29.463635] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:20.115 [2024-07-25 19:58:29.463648] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.463664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:20.115 [2024-07-25 19:58:29.463672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.463683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.115 [2024-07-25 19:58:29.463704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.115 [2024-07-25 19:58:29.463845] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.115 [2024-07-25 19:58:29.463860] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.115 [2024-07-25 19:58:29.463867] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.463874] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=4096, cccid=0 00:28:20.115 [2024-07-25 19:58:29.463886] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7094c0) on tqpair(0x6a1980): expected_datao=0, payload_size=4096 00:28:20.115 [2024-07-25 19:58:29.463894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.463905] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.463912] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.463946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.115 [2024-07-25 19:58:29.463958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.115 [2024-07-25 19:58:29.463964] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.463971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.115 [2024-07-25 19:58:29.463986] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:20.115 [2024-07-25 19:58:29.463996] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:20.115 [2024-07-25 19:58:29.464004] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:20.115 [2024-07-25 19:58:29.464011] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:20.115 [2024-07-25 19:58:29.464018] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:20.115 [2024-07-25 19:58:29.464026] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464041] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464076] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:20.115 
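[editor note] The identify-controller step above is where the driver records the transport and MDTS transfer limits (131072 bytes here, CNTLID 0x0001) before arming asynchronous event reporting with SET FEATURES ASYNC EVENT CONFIGURATION. The following is a hedged sketch of how an attached application can read those negotiated limits and hook AER notifications; it assumes a ctrlr obtained as in the previous sketch, and the helper names are not from the test itself.

```c
/* Sketch: after spdk_nvme_connect() (see the previous example), read the
 * negotiated transfer limits and register for the asynchronous events the
 * driver is arming above. Not the test tool's code; illustrative only. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* cdw0 carries the async event type/info reported by the target. */
	printf("AER completion, cdw0=0x%08x\n", cpl->cdw0);
}

void
inspect_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* Matches the "MDTS max_xfer_size 131072" line in the debug log. */
	printf("cntlid 0x%04x, max xfer %u bytes\n",
	       cdata->cntlid, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* The admin queue (and therefore AER delivery and keep-alive) is
	 * driven by polling; an application calls this from its main loop. */
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}
```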
[2024-07-25 19:58:29.464109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.115 [2024-07-25 19:58:29.464249] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.115 [2024-07-25 19:58:29.464262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.115 [2024-07-25 19:58:29.464268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464275] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7094c0) on tqpair=0x6a1980 00:28:20.115 [2024-07-25 19:58:29.464286] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464293] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464300] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.115 [2024-07-25 19:58:29.464320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.115 [2024-07-25 19:58:29.464351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464358] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.115 [2024-07-25 19:58:29.464386] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464393] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.115 [2024-07-25 19:58:29.464416] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464451] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464464] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.115 [2024-07-25 19:58:29.464502] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7094c0, cid 0, qid 0 00:28:20.115 [2024-07-25 19:58:29.464528] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709620, cid 1, qid 0 00:28:20.115 [2024-07-25 19:58:29.464536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709780, cid 2, qid 0 00:28:20.115 [2024-07-25 19:58:29.464544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7098e0, cid 3, qid 0 00:28:20.115 [2024-07-25 19:58:29.464552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.115 [2024-07-25 19:58:29.464689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.115 [2024-07-25 19:58:29.464701] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.115 [2024-07-25 19:58:29.464708] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464715] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.115 [2024-07-25 19:58:29.464723] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:20.115 [2024-07-25 19:58:29.464732] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464746] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464757] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.464767] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.464781] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.115 [2024-07-25 19:58:29.464791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:20.115 [2024-07-25 19:58:29.464812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.115 [2024-07-25 19:58:29.465005] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.115 [2024-07-25 19:58:29.465021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.115 [2024-07-25 19:58:29.465027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.465034] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.115 [2024-07-25 19:58:29.465115] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.465137] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.465152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.465160] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.115 
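[editor note] With the keep-alive timer and the number of queues negotiated, the driver next walks the active-namespace list (the IDENTIFY cns=02, cns=00 and cns=03 commands that follow, ending with "Namespace 1 was added"). A hedged sketch of how an attached application enumerates the result; it again assumes a ctrlr obtained as in the first sketch and is not the test tool's code.

```c
/* Sketch: enumerate the active namespaces discovered by the identify steps
 * traced above (IDENTIFY active NS list, then per-namespace identify).
 * Assumes `ctrlr` came from spdk_nvme_connect() as in the first example. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		/* Corresponds to "Namespace 1 was added" in the debug log. */
		printf("ns %u: %" PRIu64 " bytes, %u-byte sectors\n",
		       nsid,
		       spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}
```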
[2024-07-25 19:58:29.465171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.115 [2024-07-25 19:58:29.465192] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.115 [2024-07-25 19:58:29.469071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.115 [2024-07-25 19:58:29.469088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.115 [2024-07-25 19:58:29.469095] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.469102] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=4096, cccid=4 00:28:20.115 [2024-07-25 19:58:29.469110] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709a40) on tqpair(0x6a1980): expected_datao=0, payload_size=4096 00:28:20.115 [2024-07-25 19:58:29.469117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.469127] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.469135] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.469143] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.115 [2024-07-25 19:58:29.469152] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.115 [2024-07-25 19:58:29.469158] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.115 [2024-07-25 19:58:29.469165] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.115 [2024-07-25 19:58:29.469179] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:20.115 [2024-07-25 19:58:29.469202] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.469220] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:20.115 [2024-07-25 19:58:29.469234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.469253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.469276] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.116 [2024-07-25 19:58:29.469479] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.116 [2024-07-25 19:58:29.469495] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.116 [2024-07-25 19:58:29.469501] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469508] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=4096, cccid=4 00:28:20.116 [2024-07-25 19:58:29.469516] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709a40) on tqpair(0x6a1980): expected_datao=0, payload_size=4096 00:28:20.116 [2024-07-25 19:58:29.469524] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469534] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469541] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469580] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.469594] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.469601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.469627] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.469645] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.469660] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.469678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.469700] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.116 [2024-07-25 19:58:29.469848] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.116 [2024-07-25 19:58:29.469864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.116 [2024-07-25 19:58:29.469870] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469877] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=4096, cccid=4 00:28:20.116 [2024-07-25 19:58:29.469884] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709a40) on tqpair(0x6a1980): expected_datao=0, payload_size=4096 00:28:20.116 [2024-07-25 19:58:29.469892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469902] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469909] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.469957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.469963] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.469970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.469983] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.469998] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.470013] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.470024] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.470033] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.470041] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:20.116 [2024-07-25 19:58:29.470049] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:20.116 [2024-07-25 19:58:29.470067] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:20.116 [2024-07-25 19:58:29.470091] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.470111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.470126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470133] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.470149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.116 [2024-07-25 19:58:29.470173] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.116 [2024-07-25 19:58:29.470185] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709ba0, cid 5, qid 0 00:28:20.116 [2024-07-25 19:58:29.470329] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.470343] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.470350] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.470368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.470377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.470384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709ba0) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.470406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470415] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.470426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.470447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709ba0, cid 5, qid 0 00:28:20.116 [2024-07-25 19:58:29.470589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.470603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.470610] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709ba0) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.470633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.470653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.470673] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709ba0, cid 5, qid 0 00:28:20.116 [2024-07-25 19:58:29.470789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.470804] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.470811] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470818] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709ba0) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.470834] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470843] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.470854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.470874] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709ba0, cid 5, qid 0 00:28:20.116 [2024-07-25 19:58:29.470966] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.116 [2024-07-25 19:58:29.470984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.116 [2024-07-25 19:58:29.470991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.470998] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709ba0) on tqpair=0x6a1980 00:28:20.116 [2024-07-25 19:58:29.471018] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.471028] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.471038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.471050] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.471065] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.471077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.471089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.471097] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.471106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.471118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.116 [2024-07-25 19:58:29.471125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6a1980) 00:28:20.116 [2024-07-25 19:58:29.471134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.116 [2024-07-25 19:58:29.471156] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709ba0, cid 5, qid 0 00:28:20.116 [2024-07-25 19:58:29.471167] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709a40, cid 4, qid 0 00:28:20.116 [2024-07-25 19:58:29.471175] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709d00, cid 6, qid 0 00:28:20.116 [2024-07-25 19:58:29.471183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709e60, cid 7, qid 0 00:28:20.116 [2024-07-25 19:58:29.471384] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.117 [2024-07-25 19:58:29.471400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.117 [2024-07-25 19:58:29.471406] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471413] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=8192, cccid=5 00:28:20.117 [2024-07-25 19:58:29.471421] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709ba0) on tqpair(0x6a1980): expected_datao=0, payload_size=8192 00:28:20.117 [2024-07-25 19:58:29.471429] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471451] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471459] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.117 [2024-07-25 19:58:29.471482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.117 [2024-07-25 19:58:29.471488] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471495] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=512, cccid=4 00:28:20.117 [2024-07-25 19:58:29.471502] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709a40) on tqpair(0x6a1980): expected_datao=0, payload_size=512 00:28:20.117 [2024-07-25 19:58:29.471510] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471519] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471529] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471538] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.117 [2024-07-25 
19:58:29.471547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.117 [2024-07-25 19:58:29.471554] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471560] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=512, cccid=6 00:28:20.117 [2024-07-25 19:58:29.471567] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709d00) on tqpair(0x6a1980): expected_datao=0, payload_size=512 00:28:20.117 [2024-07-25 19:58:29.471575] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471584] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471591] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.117 [2024-07-25 19:58:29.471608] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.117 [2024-07-25 19:58:29.471614] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471620] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6a1980): datao=0, datal=4096, cccid=7 00:28:20.117 [2024-07-25 19:58:29.471628] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x709e60) on tqpair(0x6a1980): expected_datao=0, payload_size=4096 00:28:20.117 [2024-07-25 19:58:29.471635] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471645] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471652] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471663] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.117 [2024-07-25 19:58:29.471673] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.117 [2024-07-25 19:58:29.471679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471686] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709ba0) on tqpair=0x6a1980 00:28:20.117 [2024-07-25 19:58:29.471704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.117 [2024-07-25 19:58:29.471715] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.117 [2024-07-25 19:58:29.471722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471728] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709a40) on tqpair=0x6a1980 00:28:20.117 [2024-07-25 19:58:29.471742] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.117 [2024-07-25 19:58:29.471767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.117 [2024-07-25 19:58:29.471773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471780] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709d00) on tqpair=0x6a1980 00:28:20.117 [2024-07-25 19:58:29.471793] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.117 [2024-07-25 19:58:29.471803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.117 [2024-07-25 19:58:29.471810] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.117 [2024-07-25 19:58:29.471830] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709e60) on tqpair=0x6a1980 00:28:20.117 ===================================================== 00:28:20.117 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.117 ===================================================== 00:28:20.117 Controller Capabilities/Features 00:28:20.117 ================================ 00:28:20.117 Vendor ID: 8086 00:28:20.117 Subsystem Vendor ID: 8086 00:28:20.117 Serial Number: SPDK00000000000001 00:28:20.117 Model Number: SPDK bdev Controller 00:28:20.117 Firmware Version: 24.05.1 00:28:20.117 Recommended Arb Burst: 6 00:28:20.117 IEEE OUI Identifier: e4 d2 5c 00:28:20.117 Multi-path I/O 00:28:20.117 May have multiple subsystem ports: Yes 00:28:20.117 May have multiple controllers: Yes 00:28:20.117 Associated with SR-IOV VF: No 00:28:20.117 Max Data Transfer Size: 131072 00:28:20.117 Max Number of Namespaces: 32 00:28:20.117 Max Number of I/O Queues: 127 00:28:20.117 NVMe Specification Version (VS): 1.3 00:28:20.117 NVMe Specification Version (Identify): 1.3 00:28:20.117 Maximum Queue Entries: 128 00:28:20.117 Contiguous Queues Required: Yes 00:28:20.117 Arbitration Mechanisms Supported 00:28:20.117 Weighted Round Robin: Not Supported 00:28:20.117 Vendor Specific: Not Supported 00:28:20.117 Reset Timeout: 15000 ms 00:28:20.117 Doorbell Stride: 4 bytes 00:28:20.117 NVM Subsystem Reset: Not Supported 00:28:20.117 Command Sets Supported 00:28:20.117 NVM Command Set: Supported 00:28:20.117 Boot Partition: Not Supported 00:28:20.117 Memory Page Size Minimum: 4096 bytes 00:28:20.117 Memory Page Size Maximum: 4096 bytes 00:28:20.117 Persistent Memory Region: Not Supported 00:28:20.117 Optional Asynchronous Events Supported 00:28:20.117 Namespace Attribute Notices: Supported 00:28:20.117 Firmware Activation Notices: Not Supported 00:28:20.117 ANA Change Notices: Not Supported 00:28:20.117 PLE Aggregate Log Change Notices: Not Supported 00:28:20.117 LBA Status Info Alert Notices: Not Supported 00:28:20.117 EGE Aggregate Log Change Notices: Not Supported 00:28:20.117 Normal NVM Subsystem Shutdown event: Not Supported 00:28:20.117 Zone Descriptor Change Notices: Not Supported 00:28:20.117 Discovery Log Change Notices: Not Supported 00:28:20.117 Controller Attributes 00:28:20.117 128-bit Host Identifier: Supported 00:28:20.117 Non-Operational Permissive Mode: Not Supported 00:28:20.117 NVM Sets: Not Supported 00:28:20.117 Read Recovery Levels: Not Supported 00:28:20.117 Endurance Groups: Not Supported 00:28:20.117 Predictable Latency Mode: Not Supported 00:28:20.117 Traffic Based Keep ALive: Not Supported 00:28:20.117 Namespace Granularity: Not Supported 00:28:20.117 SQ Associations: Not Supported 00:28:20.117 UUID List: Not Supported 00:28:20.117 Multi-Domain Subsystem: Not Supported 00:28:20.117 Fixed Capacity Management: Not Supported 00:28:20.117 Variable Capacity Management: Not Supported 00:28:20.117 Delete Endurance Group: Not Supported 00:28:20.117 Delete NVM Set: Not Supported 00:28:20.117 Extended LBA Formats Supported: Not Supported 00:28:20.117 Flexible Data Placement Supported: Not Supported 00:28:20.117 00:28:20.117 Controller Memory Buffer Support 00:28:20.117 ================================ 00:28:20.117 Supported: No 00:28:20.117 00:28:20.117 Persistent Memory Region Support 00:28:20.117 ================================ 00:28:20.117 Supported: No 00:28:20.117 00:28:20.117 Admin Command Set Attributes 00:28:20.117 ============================ 00:28:20.117 
Security Send/Receive: Not Supported 00:28:20.117 Format NVM: Not Supported 00:28:20.117 Firmware Activate/Download: Not Supported 00:28:20.117 Namespace Management: Not Supported 00:28:20.117 Device Self-Test: Not Supported 00:28:20.117 Directives: Not Supported 00:28:20.117 NVMe-MI: Not Supported 00:28:20.117 Virtualization Management: Not Supported 00:28:20.117 Doorbell Buffer Config: Not Supported 00:28:20.117 Get LBA Status Capability: Not Supported 00:28:20.117 Command & Feature Lockdown Capability: Not Supported 00:28:20.117 Abort Command Limit: 4 00:28:20.117 Async Event Request Limit: 4 00:28:20.117 Number of Firmware Slots: N/A 00:28:20.117 Firmware Slot 1 Read-Only: N/A 00:28:20.117 Firmware Activation Without Reset: N/A 00:28:20.117 Multiple Update Detection Support: N/A 00:28:20.117 Firmware Update Granularity: No Information Provided 00:28:20.117 Per-Namespace SMART Log: No 00:28:20.117 Asymmetric Namespace Access Log Page: Not Supported 00:28:20.117 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:20.117 Command Effects Log Page: Supported 00:28:20.117 Get Log Page Extended Data: Supported 00:28:20.117 Telemetry Log Pages: Not Supported 00:28:20.117 Persistent Event Log Pages: Not Supported 00:28:20.117 Supported Log Pages Log Page: May Support 00:28:20.117 Commands Supported & Effects Log Page: Not Supported 00:28:20.117 Feature Identifiers & Effects Log Page:May Support 00:28:20.117 NVMe-MI Commands & Effects Log Page: May Support 00:28:20.117 Data Area 4 for Telemetry Log: Not Supported 00:28:20.117 Error Log Page Entries Supported: 128 00:28:20.118 Keep Alive: Supported 00:28:20.118 Keep Alive Granularity: 10000 ms 00:28:20.118 00:28:20.118 NVM Command Set Attributes 00:28:20.118 ========================== 00:28:20.118 Submission Queue Entry Size 00:28:20.118 Max: 64 00:28:20.118 Min: 64 00:28:20.118 Completion Queue Entry Size 00:28:20.118 Max: 16 00:28:20.118 Min: 16 00:28:20.118 Number of Namespaces: 32 00:28:20.118 Compare Command: Supported 00:28:20.118 Write Uncorrectable Command: Not Supported 00:28:20.118 Dataset Management Command: Supported 00:28:20.118 Write Zeroes Command: Supported 00:28:20.118 Set Features Save Field: Not Supported 00:28:20.118 Reservations: Supported 00:28:20.118 Timestamp: Not Supported 00:28:20.118 Copy: Supported 00:28:20.118 Volatile Write Cache: Present 00:28:20.118 Atomic Write Unit (Normal): 1 00:28:20.118 Atomic Write Unit (PFail): 1 00:28:20.118 Atomic Compare & Write Unit: 1 00:28:20.118 Fused Compare & Write: Supported 00:28:20.118 Scatter-Gather List 00:28:20.118 SGL Command Set: Supported 00:28:20.118 SGL Keyed: Supported 00:28:20.118 SGL Bit Bucket Descriptor: Not Supported 00:28:20.118 SGL Metadata Pointer: Not Supported 00:28:20.118 Oversized SGL: Not Supported 00:28:20.118 SGL Metadata Address: Not Supported 00:28:20.118 SGL Offset: Supported 00:28:20.118 Transport SGL Data Block: Not Supported 00:28:20.118 Replay Protected Memory Block: Not Supported 00:28:20.118 00:28:20.118 Firmware Slot Information 00:28:20.118 ========================= 00:28:20.118 Active slot: 1 00:28:20.118 Slot 1 Firmware Revision: 24.05.1 00:28:20.118 00:28:20.118 00:28:20.118 Commands Supported and Effects 00:28:20.118 ============================== 00:28:20.118 Admin Commands 00:28:20.118 -------------- 00:28:20.118 Get Log Page (02h): Supported 00:28:20.118 Identify (06h): Supported 00:28:20.118 Abort (08h): Supported 00:28:20.118 Set Features (09h): Supported 00:28:20.118 Get Features (0Ah): Supported 00:28:20.118 Asynchronous Event Request 
(0Ch): Supported 00:28:20.118 Keep Alive (18h): Supported 00:28:20.118 I/O Commands 00:28:20.118 ------------ 00:28:20.118 Flush (00h): Supported LBA-Change 00:28:20.118 Write (01h): Supported LBA-Change 00:28:20.118 Read (02h): Supported 00:28:20.118 Compare (05h): Supported 00:28:20.118 Write Zeroes (08h): Supported LBA-Change 00:28:20.118 Dataset Management (09h): Supported LBA-Change 00:28:20.118 Copy (19h): Supported LBA-Change 00:28:20.118 Unknown (79h): Supported LBA-Change 00:28:20.118 Unknown (7Ah): Supported 00:28:20.118 00:28:20.118 Error Log 00:28:20.118 ========= 00:28:20.118 00:28:20.118 Arbitration 00:28:20.118 =========== 00:28:20.118 Arbitration Burst: 1 00:28:20.118 00:28:20.118 Power Management 00:28:20.118 ================ 00:28:20.118 Number of Power States: 1 00:28:20.118 Current Power State: Power State #0 00:28:20.118 Power State #0: 00:28:20.118 Max Power: 0.00 W 00:28:20.118 Non-Operational State: Operational 00:28:20.118 Entry Latency: Not Reported 00:28:20.118 Exit Latency: Not Reported 00:28:20.118 Relative Read Throughput: 0 00:28:20.118 Relative Read Latency: 0 00:28:20.118 Relative Write Throughput: 0 00:28:20.118 Relative Write Latency: 0 00:28:20.118 Idle Power: Not Reported 00:28:20.118 Active Power: Not Reported 00:28:20.118 Non-Operational Permissive Mode: Not Supported 00:28:20.118 00:28:20.118 Health Information 00:28:20.118 ================== 00:28:20.118 Critical Warnings: 00:28:20.118 Available Spare Space: OK 00:28:20.118 Temperature: OK 00:28:20.118 Device Reliability: OK 00:28:20.118 Read Only: No 00:28:20.118 Volatile Memory Backup: OK 00:28:20.118 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:20.118 Temperature Threshold: [2024-07-25 19:58:29.471958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.471970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6a1980) 00:28:20.118 [2024-07-25 19:58:29.471981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.118 [2024-07-25 19:58:29.472002] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x709e60, cid 7, qid 0 00:28:20.118 [2024-07-25 19:58:29.472155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.118 [2024-07-25 19:58:29.472174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.118 [2024-07-25 19:58:29.472181] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x709e60) on tqpair=0x6a1980 00:28:20.118 [2024-07-25 19:58:29.472227] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:20.118 [2024-07-25 19:58:29.472248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.118 [2024-07-25 19:58:29.472260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.118 [2024-07-25 19:58:29.472270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.118 [2024-07-25 19:58:29.472280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:20.118 [2024-07-25 19:58:29.472292] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472306] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6a1980) 00:28:20.118 [2024-07-25 19:58:29.472317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.118 [2024-07-25 19:58:29.472339] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7098e0, cid 3, qid 0 00:28:20.118 [2024-07-25 19:58:29.472483] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.118 [2024-07-25 19:58:29.472498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.118 [2024-07-25 19:58:29.472505] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472511] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7098e0) on tqpair=0x6a1980 00:28:20.118 [2024-07-25 19:58:29.472522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472537] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6a1980) 00:28:20.118 [2024-07-25 19:58:29.472547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.118 [2024-07-25 19:58:29.472574] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7098e0, cid 3, qid 0 00:28:20.118 [2024-07-25 19:58:29.472693] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.118 [2024-07-25 19:58:29.472705] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.118 [2024-07-25 19:58:29.472712] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472719] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7098e0) on tqpair=0x6a1980 00:28:20.118 [2024-07-25 19:58:29.472727] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:20.118 [2024-07-25 19:58:29.472735] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:20.118 [2024-07-25 19:58:29.472751] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472760] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6a1980) 00:28:20.118 [2024-07-25 19:58:29.472777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.118 [2024-07-25 19:58:29.472797] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7098e0, cid 3, qid 0 00:28:20.118 [2024-07-25 19:58:29.472888] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.118 [2024-07-25 19:58:29.472900] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.118 [2024-07-25 19:58:29.472911] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472918] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7098e0) on tqpair=0x6a1980 00:28:20.118 [2024-07-25 19:58:29.472935] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.472951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6a1980) 00:28:20.118 [2024-07-25 19:58:29.472961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.118 [2024-07-25 19:58:29.472981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7098e0, cid 3, qid 0 00:28:20.118 [2024-07-25 19:58:29.477070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.118 [2024-07-25 19:58:29.477087] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.118 [2024-07-25 19:58:29.477093] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.477100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7098e0) on tqpair=0x6a1980 00:28:20.118 [2024-07-25 19:58:29.477134] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.477144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.118 [2024-07-25 19:58:29.477151] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6a1980) 00:28:20.118 [2024-07-25 19:58:29.477162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.118 [2024-07-25 19:58:29.477185] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7098e0, cid 3, qid 0 00:28:20.119 [2024-07-25 19:58:29.477330] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.119 [2024-07-25 19:58:29.477345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.119 [2024-07-25 19:58:29.477351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.119 [2024-07-25 19:58:29.477358] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7098e0) on tqpair=0x6a1980 00:28:20.119 [2024-07-25 19:58:29.477372] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:20.119 0 Kelvin (-273 Celsius) 00:28:20.119 Available Spare: 0% 00:28:20.119 Available Spare Threshold: 0% 00:28:20.119 Life Percentage Used: 0% 00:28:20.119 Data Units Read: 0 00:28:20.119 Data Units Written: 0 00:28:20.119 Host Read Commands: 0 00:28:20.119 Host Write Commands: 0 00:28:20.119 Controller Busy Time: 0 minutes 00:28:20.119 Power Cycles: 0 00:28:20.119 Power On Hours: 0 hours 00:28:20.119 Unsafe Shutdowns: 0 00:28:20.119 Unrecoverable Media Errors: 0 00:28:20.119 Lifetime Error Log Entries: 0 00:28:20.119 Warning Temperature Time: 0 minutes 00:28:20.119 Critical Temperature Time: 0 minutes 00:28:20.119 00:28:20.119 Number of Queues 00:28:20.119 ================ 00:28:20.119 Number of I/O Submission Queues: 127 00:28:20.119 Number of I/O Completion Queues: 127 00:28:20.119 00:28:20.119 Active Namespaces 00:28:20.119 ================= 00:28:20.119 Namespace ID:1 00:28:20.119 Error Recovery Timeout: Unlimited 00:28:20.119 Command Set Identifier: NVM (00h) 00:28:20.119 Deallocate: Supported 00:28:20.119 Deallocated/Unwritten 
Error: Not Supported 00:28:20.119 Deallocated Read Value: Unknown 00:28:20.119 Deallocate in Write Zeroes: Not Supported 00:28:20.119 Deallocated Guard Field: 0xFFFF 00:28:20.119 Flush: Supported 00:28:20.119 Reservation: Supported 00:28:20.119 Namespace Sharing Capabilities: Multiple Controllers 00:28:20.119 Size (in LBAs): 131072 (0GiB) 00:28:20.119 Capacity (in LBAs): 131072 (0GiB) 00:28:20.119 Utilization (in LBAs): 131072 (0GiB) 00:28:20.119 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:20.119 EUI64: ABCDEF0123456789 00:28:20.119 UUID: 128a1300-fcea-4d1e-8048-1b4f939dd82d 00:28:20.119 Thin Provisioning: Not Supported 00:28:20.119 Per-NS Atomic Units: Yes 00:28:20.119 Atomic Boundary Size (Normal): 0 00:28:20.119 Atomic Boundary Size (PFail): 0 00:28:20.119 Atomic Boundary Offset: 0 00:28:20.119 Maximum Single Source Range Length: 65535 00:28:20.119 Maximum Copy Length: 65535 00:28:20.119 Maximum Source Range Count: 1 00:28:20.119 NGUID/EUI64 Never Reused: No 00:28:20.119 Namespace Write Protected: No 00:28:20.119 Number of LBA Formats: 1 00:28:20.119 Current LBA Format: LBA Format #00 00:28:20.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:20.119 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.119 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.119 rmmod nvme_tcp 00:28:20.119 rmmod nvme_fabrics 00:28:20.119 rmmod nvme_keyring 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4071542 ']' 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4071542 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 4071542 ']' 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 4071542 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4071542 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:20.377 19:58:29 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4071542' 00:28:20.377 killing process with pid 4071542 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 4071542 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 4071542 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.377 19:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.916 19:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.916 00:28:22.916 real 0m5.143s 00:28:22.916 user 0m4.033s 00:28:22.916 sys 0m1.722s 00:28:22.916 19:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:22.916 19:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.916 ************************************ 00:28:22.916 END TEST nvmf_identify 00:28:22.916 ************************************ 00:28:22.916 19:58:31 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:22.916 19:58:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:22.916 19:58:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:22.916 19:58:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.916 ************************************ 00:28:22.916 START TEST nvmf_perf 00:28:22.916 ************************************ 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:22.916 * Looking for test storage... 
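(Annotation) Condensed from the nvmftestfini xtrace just above, the nvmf_identify teardown amounts to roughly the following shell sequence. This is a simplified sketch: the pid 4071542 and the cvl_0_* interface names are specific to this run, and the explicit `ip netns delete` line is an assumption about what the _remove_spdk_ns helper does rather than a command shown in the trace.

# simplified teardown sketch (names/pid from this run; netns delete is assumed)
modprobe -v -r nvme-tcp            # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactor (pid 4071542 here)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null                # what _remove_spdk_ns boils down to (assumption)
ip -4 addr flush cvl_0_1                                   # clear the initiator-side address, as traced above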
00:28:22.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.916 19:58:31 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.917 19:58:31 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.917 19:58:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:24.817 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.818 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.818 19:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:24.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:28:24.818 00:28:24.818 --- 10.0.0.2 ping statistics --- 00:28:24.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.818 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:28:24.818 00:28:24.818 --- 10.0.0.1 ping statistics --- 00:28:24.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.818 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4073618 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4073618 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 4073618 ']' 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:24.818 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.818 [2024-07-25 19:58:34.181409] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:24.818 [2024-07-25 19:58:34.181487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.818 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.077 [2024-07-25 19:58:34.262095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.077 [2024-07-25 19:58:34.357482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.077 [2024-07-25 19:58:34.357544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:25.077 [2024-07-25 19:58:34.357560] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.077 [2024-07-25 19:58:34.357573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.077 [2024-07-25 19:58:34.357586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.077 [2024-07-25 19:58:34.357652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.077 [2024-07-25 19:58:34.357735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.077 [2024-07-25 19:58:34.357823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.077 [2024-07-25 19:58:34.357826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:25.077 19:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:28.364 19:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:28.364 19:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:28.622 19:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:28.622 19:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:28.880 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:28.880 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:28.880 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:28.880 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:28.880 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:29.137 [2024-07-25 19:58:38.340434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.137 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.395 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:29.395 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.653 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:29.653 19:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:29.912 19:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.912 [2024-07-25 19:58:39.332144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.170 19:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.427 19:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:30.427 19:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:30.427 19:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:30.427 19:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:31.805 Initializing NVMe Controllers 00:28:31.805 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:31.805 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:31.805 Initialization complete. Launching workers. 00:28:31.805 ======================================================== 00:28:31.805 Latency(us) 00:28:31.805 Device Information : IOPS MiB/s Average min max 00:28:31.805 PCIE (0000:88:00.0) NSID 1 from core 0: 85098.94 332.42 375.44 46.09 4329.74 00:28:31.805 ======================================================== 00:28:31.805 Total : 85098.94 332.42 375.44 46.09 4329.74 00:28:31.805 00:28:31.805 19:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.805 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.737 Initializing NVMe Controllers 00:28:32.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.737 Initialization complete. Launching workers. 
00:28:32.737 ======================================================== 00:28:32.737 Latency(us) 00:28:32.737 Device Information : IOPS MiB/s Average min max 00:28:32.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.00 0.36 11230.14 152.11 45804.33 00:28:32.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16330.24 6980.96 47897.19 00:28:32.737 ======================================================== 00:28:32.737 Total : 155.00 0.61 13270.18 152.11 47897.19 00:28:32.737 00:28:32.737 19:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:32.998 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.960 Initializing NVMe Controllers 00:28:33.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:33.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:33.960 Initialization complete. Launching workers. 00:28:33.960 ======================================================== 00:28:33.960 Latency(us) 00:28:33.960 Device Information : IOPS MiB/s Average min max 00:28:33.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8523.98 33.30 3769.86 610.32 7443.83 00:28:33.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.99 15.04 8348.69 5880.62 16078.00 00:28:33.960 ======================================================== 00:28:33.960 Total : 12372.98 48.33 5194.25 610.32 16078.00 00:28:33.960 00:28:34.216 19:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:34.217 19:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:34.217 19:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.217 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.749 Initializing NVMe Controllers 00:28:36.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.749 Controller IO queue size 128, less than required. 00:28:36.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.749 Controller IO queue size 128, less than required. 00:28:36.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:36.749 Initialization complete. Launching workers. 
00:28:36.749 ======================================================== 00:28:36.749 Latency(us) 00:28:36.749 Device Information : IOPS MiB/s Average min max 00:28:36.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.43 421.86 77139.59 45058.10 108891.69 00:28:36.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.14 141.79 231991.60 81785.62 376989.05 00:28:36.749 ======================================================== 00:28:36.749 Total : 2254.58 563.64 116092.83 45058.10 376989.05 00:28:36.749 00:28:36.749 19:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:36.749 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.006 No valid NVMe controllers or AIO or URING devices found 00:28:37.006 Initializing NVMe Controllers 00:28:37.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.006 Controller IO queue size 128, less than required. 00:28:37.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:37.006 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:37.006 Controller IO queue size 128, less than required. 00:28:37.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:37.006 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:37.006 WARNING: Some requested NVMe devices were skipped 00:28:37.006 19:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:37.006 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.540 Initializing NVMe Controllers 00:28:39.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.540 Controller IO queue size 128, less than required. 00:28:39.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.540 Controller IO queue size 128, less than required. 00:28:39.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:39.540 Initialization complete. Launching workers. 
00:28:39.540 00:28:39.540 ==================== 00:28:39.540 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:39.540 TCP transport: 00:28:39.540 polls: 11278 00:28:39.540 idle_polls: 5513 00:28:39.540 sock_completions: 5765 00:28:39.540 nvme_completions: 6791 00:28:39.540 submitted_requests: 10304 00:28:39.540 queued_requests: 1 00:28:39.540 00:28:39.540 ==================== 00:28:39.540 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:39.540 TCP transport: 00:28:39.540 polls: 11737 00:28:39.540 idle_polls: 7537 00:28:39.540 sock_completions: 4200 00:28:39.540 nvme_completions: 4111 00:28:39.540 submitted_requests: 6138 00:28:39.540 queued_requests: 1 00:28:39.540 ======================================================== 00:28:39.540 Latency(us) 00:28:39.540 Device Information : IOPS MiB/s Average min max 00:28:39.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1695.89 423.97 77183.63 54101.78 133127.21 00:28:39.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1026.53 256.63 127800.77 58894.07 200452.10 00:28:39.540 ======================================================== 00:28:39.540 Total : 2722.42 680.60 96269.54 54101.78 200452.10 00:28:39.540 00:28:39.540 19:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:39.540 19:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.798 19:58:49 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:39.798 19:58:49 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:39.798 19:58:49 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=318d0f3a-21cc-4c78-a7a6-1472c8b91ec6 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 318d0f3a-21cc-4c78-a7a6-1472c8b91ec6 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=318d0f3a-21cc-4c78-a7a6-1472c8b91ec6 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:43.083 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:43.340 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:43.340 { 00:28:43.341 "uuid": "318d0f3a-21cc-4c78-a7a6-1472c8b91ec6", 00:28:43.341 "name": "lvs_0", 00:28:43.341 "base_bdev": "Nvme0n1", 00:28:43.341 "total_data_clusters": 238234, 00:28:43.341 "free_clusters": 238234, 00:28:43.341 "block_size": 512, 00:28:43.341 "cluster_size": 4194304 00:28:43.341 } 00:28:43.341 ]' 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="318d0f3a-21cc-4c78-a7a6-1472c8b91ec6") .free_clusters' 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="318d0f3a-21cc-4c78-a7a6-1472c8b91ec6") .cluster_size' 00:28:43.341 19:58:52 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:43.341 952936 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:43.341 19:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 318d0f3a-21cc-4c78-a7a6-1472c8b91ec6 lbd_0 20480 00:28:43.907 19:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4ebd170f-399c-49f6-9ed5-0ee88f7e5633 00:28:43.907 19:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4ebd170f-399c-49f6-9ed5-0ee88f7e5633 lvs_n_0 00:28:44.840 19:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8fa3f2d0-fcb5-433f-a544-da25eb755a7b 00:28:44.841 19:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8fa3f2d0-fcb5-433f-a544-da25eb755a7b 00:28:44.841 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=8fa3f2d0-fcb5-433f-a544-da25eb755a7b 00:28:44.841 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:44.841 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:44.841 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:44.841 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:45.099 { 00:28:45.099 "uuid": "318d0f3a-21cc-4c78-a7a6-1472c8b91ec6", 00:28:45.099 "name": "lvs_0", 00:28:45.099 "base_bdev": "Nvme0n1", 00:28:45.099 "total_data_clusters": 238234, 00:28:45.099 "free_clusters": 233114, 00:28:45.099 "block_size": 512, 00:28:45.099 "cluster_size": 4194304 00:28:45.099 }, 00:28:45.099 { 00:28:45.099 "uuid": "8fa3f2d0-fcb5-433f-a544-da25eb755a7b", 00:28:45.099 "name": "lvs_n_0", 00:28:45.099 "base_bdev": "4ebd170f-399c-49f6-9ed5-0ee88f7e5633", 00:28:45.099 "total_data_clusters": 5114, 00:28:45.099 "free_clusters": 5114, 00:28:45.099 "block_size": 512, 00:28:45.099 "cluster_size": 4194304 00:28:45.099 } 00:28:45.099 ]' 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="8fa3f2d0-fcb5-433f-a544-da25eb755a7b") .free_clusters' 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="8fa3f2d0-fcb5-433f-a544-da25eb755a7b") .cluster_size' 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:45.099 20456 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:45.099 19:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8fa3f2d0-fcb5-433f-a544-da25eb755a7b lbd_nest_0 20456 00:28:45.357 19:58:54 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=1e75ee68-9294-425f-822d-4bd75ba834fb 00:28:45.357 19:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.614 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:45.614 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1e75ee68-9294-425f-822d-4bd75ba834fb 00:28:45.872 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.130 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:46.130 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:46.130 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:46.130 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:46.130 19:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.130 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.339 Initializing NVMe Controllers 00:28:58.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:58.339 Initialization complete. Launching workers. 00:28:58.339 ======================================================== 00:28:58.339 Latency(us) 00:28:58.339 Device Information : IOPS MiB/s Average min max 00:28:58.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.50 0.02 21523.11 181.93 48721.40 00:28:58.339 ======================================================== 00:28:58.339 Total : 46.50 0.02 21523.11 181.93 48721.40 00:28:58.339 00:28:58.339 19:59:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:58.339 19:59:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.339 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.352 Initializing NVMe Controllers 00:29:08.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:08.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:08.352 Initialization complete. Launching workers. 
00:29:08.352 ======================================================== 00:29:08.352 Latency(us) 00:29:08.352 Device Information : IOPS MiB/s Average min max 00:29:08.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.49 10.19 12271.21 5024.79 47882.35 00:29:08.352 ======================================================== 00:29:08.352 Total : 81.49 10.19 12271.21 5024.79 47882.35 00:29:08.352 00:29:08.352 19:59:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:08.352 19:59:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:08.352 19:59:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.352 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.337 Initializing NVMe Controllers 00:29:18.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.337 Initialization complete. Launching workers. 00:29:18.337 ======================================================== 00:29:18.337 Latency(us) 00:29:18.337 Device Information : IOPS MiB/s Average min max 00:29:18.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7306.30 3.57 4379.83 292.15 12028.74 00:29:18.337 ======================================================== 00:29:18.337 Total : 7306.30 3.57 4379.83 292.15 12028.74 00:29:18.337 00:29:18.337 19:59:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:18.337 19:59:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.337 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.304 Initializing NVMe Controllers 00:29:28.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:28.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:28.304 Initialization complete. Launching workers. 00:29:28.304 ======================================================== 00:29:28.304 Latency(us) 00:29:28.304 Device Information : IOPS MiB/s Average min max 00:29:28.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3151.91 393.99 10151.85 630.45 23078.38 00:29:28.304 ======================================================== 00:29:28.304 Total : 3151.91 393.99 10151.85 630.45 23078.38 00:29:28.304 00:29:28.304 19:59:36 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:28.304 19:59:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:28.304 19:59:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.304 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.275 Initializing NVMe Controllers 00:29:38.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.275 Controller IO queue size 128, less than required. 00:29:38.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:38.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.275 Initialization complete. Launching workers. 00:29:38.275 ======================================================== 00:29:38.275 Latency(us) 00:29:38.275 Device Information : IOPS MiB/s Average min max 00:29:38.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11875.94 5.80 10782.18 1580.17 28705.70 00:29:38.275 ======================================================== 00:29:38.275 Total : 11875.94 5.80 10782.18 1580.17 28705.70 00:29:38.275 00:29:38.275 19:59:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:38.275 19:59:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.275 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.471 Initializing NVMe Controllers 00:29:50.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.471 Controller IO queue size 128, less than required. 00:29:50.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.471 Initialization complete. Launching workers. 00:29:50.471 ======================================================== 00:29:50.471 Latency(us) 00:29:50.471 Device Information : IOPS MiB/s Average min max 00:29:50.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1196.93 149.62 107603.58 16270.40 198423.80 00:29:50.471 ======================================================== 00:29:50.471 Total : 1196.93 149.62 107603.58 16270.40 198423.80 00:29:50.471 00:29:50.471 19:59:57 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.471 19:59:58 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e75ee68-9294-425f-822d-4bd75ba834fb 00:29:50.471 19:59:58 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ebd170f-399c-49f6-9ed5-0ee88f7e5633 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:50.471 rmmod nvme_tcp 00:29:50.471 rmmod nvme_fabrics 00:29:50.471 rmmod nvme_keyring 00:29:50.471 19:59:59 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4073618 ']' 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4073618 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 4073618 ']' 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 4073618 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4073618 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4073618' 00:29:50.471 killing process with pid 4073618 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 4073618 00:29:50.471 19:59:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 4073618 00:29:52.368 20:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:52.368 20:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:52.369 20:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:52.369 20:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:52.369 20:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:52.369 20:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.369 20:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.369 20:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.290 20:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:54.290 00:29:54.290 real 1m31.480s 00:29:54.290 user 5m36.997s 00:29:54.290 sys 0m16.253s 00:29:54.290 20:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:54.290 20:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:54.290 ************************************ 00:29:54.290 END TEST nvmf_perf 00:29:54.290 ************************************ 00:29:54.290 20:00:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:54.290 20:00:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:54.290 20:00:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:54.290 20:00:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:54.290 ************************************ 00:29:54.290 START TEST nvmf_fio_host 00:29:54.290 ************************************ 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:54.290 * Looking for test storage... 
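For reference, the nvmf_perf teardown logged just before END TEST above reduces to the following RPC sequence (a sketch; the lvol UUIDs are the ones printed in this run, and the rpc.py path is assumed relative to the SPDK checkout):
RPC=./scripts/rpc.py
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
"$RPC" bdev_lvol_delete 1e75ee68-9294-425f-822d-4bd75ba834fb    # nested lvol, as logged
"$RPC" bdev_lvol_delete_lvstore -l lvs_n_0
"$RPC" bdev_lvol_delete 4ebd170f-399c-49f6-9ed5-0ee88f7e5633    # base lvol, as logged
"$RPC" bdev_lvol_delete_lvstore -l lvs_0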
00:29:54.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.290 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:54.291 20:00:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:56.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:56.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:56.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:56.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
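The device discovery above amounts to mapping each matching PCI function to its kernel net device through sysfs. A minimal sketch of the same idea, using lspci instead of the script's cached PCI map (the E810 device ID 8086:159b is the one matched in this run):
#!/usr/bin/env bash
# Print the net device(s) backing each Intel E810 (8086:159b) PCI function
for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
  done
done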
00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:56.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:29:56.196 00:29:56.196 --- 10.0.0.2 ping statistics --- 00:29:56.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.196 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:29:56.196 00:29:56.196 --- 10.0.0.1 ping statistics --- 00:29:56.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.196 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:56.196 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4085701 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4085701 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 4085701 ']' 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:56.197 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.197 [2024-07-25 20:00:05.530015] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:29:56.197 [2024-07-25 20:00:05.530113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.197 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.197 [2024-07-25 20:00:05.601501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.454 [2024-07-25 20:00:05.692638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
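Condensed, the namespace plumbing captured in the last few lines, plus the transport and subsystem bring-up that follows, look roughly like this sketch (interface names cvl_0_0/cvl_0_1, the namespace name, and the relative paths are the ones used by this job):
#!/usr/bin/env bash
# Target NIC moves into its own network namespace; the initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # reachability check, as logged
# Start the NVMe-oF target inside the namespace, then create the TCP transport and a Malloc-backed subsystem
# (the harness waits for the RPC socket before issuing these; waitforlisten is omitted in this sketch)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420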
00:29:56.454 [2024-07-25 20:00:05.692701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.454 [2024-07-25 20:00:05.692728] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.454 [2024-07-25 20:00:05.692741] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.454 [2024-07-25 20:00:05.692753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.454 [2024-07-25 20:00:05.692822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.454 [2024-07-25 20:00:05.692890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.454 [2024-07-25 20:00:05.692984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.454 [2024-07-25 20:00:05.692986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.454 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:56.454 20:00:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:56.454 20:00:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:56.710 [2024-07-25 20:00:06.063470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.710 20:00:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:56.710 20:00:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.710 20:00:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.710 20:00:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:56.966 Malloc1 00:29:56.967 20:00:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.223 20:00:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:57.481 20:00:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.738 [2024-07-25 20:00:07.125655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.738 20:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:57.996 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:58.253 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:58.253 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:58.253 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.253 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:58.253 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:58.253 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:58.254 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:58.254 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:58.254 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:58.254 20:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:58.254 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:58.254 fio-3.35 00:29:58.254 Starting 1 thread 00:29:58.254 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.779 00:30:00.779 test: (groupid=0, jobs=1): err= 0: pid=4086305: Thu Jul 25 20:00:09 2024 00:30:00.779 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:30:00.779 slat (nsec): min=1965, max=158686, avg=2656.35, stdev=2011.10 00:30:00.779 clat (usec): min=2510, max=13270, avg=7618.84, stdev=621.20 00:30:00.779 lat (usec): min=2532, max=13272, avg=7621.50, stdev=621.10 00:30:00.779 clat percentiles (usec): 00:30:00.779 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:30:00.779 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:30:00.779 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:30:00.779 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[12256], 00:30:00.779 | 99.99th=[13173] 00:30:00.779 bw ( KiB/s): 
min=35608, max=37216, per=99.88%, avg=36746.00, stdev=764.17, samples=4 00:30:00.779 iops : min= 8904, max= 9304, avg=9187.00, stdev=190.05, samples=4 00:30:00.779 write: IOPS=9202, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 zone resets 00:30:00.779 slat (usec): min=2, max=103, avg= 2.78, stdev= 1.60 00:30:00.779 clat (usec): min=1122, max=11373, avg=6233.57, stdev=513.40 00:30:00.779 lat (usec): min=1128, max=11376, avg=6236.35, stdev=513.37 00:30:00.780 clat percentiles (usec): 00:30:00.780 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:30:00.780 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:30:00.780 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:30:00.780 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 9634], 99.95th=[10421], 00:30:00.780 | 99.99th=[11338] 00:30:00.780 bw ( KiB/s): min=36368, max=37224, per=100.00%, avg=36820.00, stdev=377.64, samples=4 00:30:00.780 iops : min= 9092, max= 9306, avg=9205.00, stdev=94.41, samples=4 00:30:00.780 lat (msec) : 2=0.03%, 4=0.09%, 10=99.74%, 20=0.14% 00:30:00.780 cpu : usr=61.90%, sys=34.71%, ctx=74, majf=0, minf=6 00:30:00.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:00.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.780 issued rwts: total=18451,18460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.780 00:30:00.780 Run status group 0 (all jobs): 00:30:00.780 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:30:00.780 WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:00.780 20:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:00.780 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:00.780 fio-3.35 00:30:00.780 Starting 1 thread 00:30:00.780 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.307 00:30:03.307 test: (groupid=0, jobs=1): err= 0: pid=4087012: Thu Jul 25 20:00:12 2024 00:30:03.307 read: IOPS=8444, BW=132MiB/s (138MB/s)(265MiB/2005msec) 00:30:03.307 slat (nsec): min=2900, max=95446, avg=3704.44, stdev=1765.79 00:30:03.307 clat (usec): min=3009, max=16432, avg=8669.71, stdev=1922.70 00:30:03.307 lat (usec): min=3013, max=16435, avg=8673.42, stdev=1922.73 00:30:03.307 clat percentiles (usec): 00:30:03.307 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7046], 00:30:03.307 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9110], 00:30:03.307 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11076], 95.00th=[11863], 00:30:03.307 | 99.00th=[13435], 99.50th=[14615], 99.90th=[16057], 99.95th=[16319], 00:30:03.307 | 99.99th=[16450] 00:30:03.307 bw ( KiB/s): min=60992, max=78944, per=51.97%, avg=70224.00, stdev=9726.75, samples=4 00:30:03.307 iops : min= 3812, max= 4934, avg=4389.00, stdev=607.92, samples=4 00:30:03.307 write: IOPS=5094, BW=79.6MiB/s (83.5MB/s)(144MiB/1804msec); 0 zone resets 00:30:03.307 slat (usec): min=30, max=191, avg=34.44, stdev= 5.87 00:30:03.307 clat (usec): min=3914, max=19629, avg=11220.59, stdev=1863.16 00:30:03.307 lat (usec): min=3948, max=19661, avg=11255.02, stdev=1863.49 00:30:03.307 clat percentiles (usec): 00:30:03.307 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:30:03.307 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:30:03.307 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13698], 95.00th=[14484], 00:30:03.307 | 99.00th=[16188], 99.50th=[16909], 99.90th=[19006], 99.95th=[19530], 00:30:03.307 | 99.99th=[19530] 00:30:03.307 bw ( KiB/s): min=63680, max=81248, per=89.68%, avg=73096.00, stdev=9318.77, samples=4 00:30:03.307 iops : min= 3980, max= 5078, avg=4568.50, stdev=582.42, samples=4 00:30:03.307 lat (msec) : 4=0.20%, 10=58.02%, 20=41.78% 00:30:03.307 cpu : usr=76.80%, sys=21.31%, ctx=38, majf=0, minf=2 
00:30:03.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:03.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:03.307 issued rwts: total=16932,9190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:03.307 00:30:03.307 Run status group 0 (all jobs): 00:30:03.307 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (277MB), run=2005-2005msec 00:30:03.307 WRITE: bw=79.6MiB/s (83.5MB/s), 79.6MiB/s-79.6MiB/s (83.5MB/s-83.5MB/s), io=144MiB (151MB), run=1804-1804msec 00:30:03.307 20:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:03.564 20:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:06.861 Nvme0n1 00:30:06.861 20:00:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d5251db7-9e77-400b-bd1a-8bb3e8f568cd 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d5251db7-9e77-400b-bd1a-8bb3e8f568cd 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=d5251db7-9e77-400b-bd1a-8bb3e8f568cd 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:09.391 20:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:09.647 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:09.647 { 00:30:09.647 "uuid": "d5251db7-9e77-400b-bd1a-8bb3e8f568cd", 00:30:09.647 "name": "lvs_0", 00:30:09.647 "base_bdev": "Nvme0n1", 00:30:09.647 "total_data_clusters": 930, 00:30:09.647 "free_clusters": 930, 00:30:09.647 "block_size": 512, 
00:30:09.647 "cluster_size": 1073741824 00:30:09.647 } 00:30:09.647 ]' 00:30:09.647 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d5251db7-9e77-400b-bd1a-8bb3e8f568cd") .free_clusters' 00:30:09.904 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:09.904 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d5251db7-9e77-400b-bd1a-8bb3e8f568cd") .cluster_size' 00:30:09.904 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:09.904 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:09.904 20:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:09.904 952320 00:30:09.904 20:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:10.161 ab4e414e-7acd-4e77-a6e6-c2d1e21c8c0b 00:30:10.161 20:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:10.418 20:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:10.675 20:00:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # 
[[ -n '' ]] 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.934 20:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.193 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:11.193 fio-3.35 00:30:11.193 Starting 1 thread 00:30:11.193 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.720 00:30:13.720 test: (groupid=0, jobs=1): err= 0: pid=4088288: Thu Jul 25 20:00:22 2024 00:30:13.720 read: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2008msec) 00:30:13.720 slat (nsec): min=1915, max=166296, avg=2687.67, stdev=2381.16 00:30:13.720 clat (usec): min=1014, max=171119, avg=12070.05, stdev=11798.94 00:30:13.720 lat (usec): min=1017, max=171156, avg=12072.73, stdev=11799.30 00:30:13.720 clat percentiles (msec): 00:30:13.720 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:30:13.720 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:30:13.720 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:13.720 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:13.720 | 99.99th=[ 171] 00:30:13.720 bw ( KiB/s): min=16072, max=25760, per=99.70%, avg=23094.00, stdev=4687.53, samples=4 00:30:13.720 iops : min= 4018, max= 6440, avg=5773.50, stdev=1171.88, samples=4 00:30:13.720 write: IOPS=5770, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2008msec); 0 zone resets 00:30:13.720 slat (usec): min=2, max=134, avg= 2.78, stdev= 1.75 00:30:13.720 clat (usec): min=315, max=169096, avg=9858.92, stdev=11076.30 00:30:13.720 lat (usec): min=319, max=169103, avg=9861.71, stdev=11076.70 00:30:13.720 clat percentiles (msec): 00:30:13.720 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:30:13.720 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:30:13.720 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:30:13.720 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:30:13.720 | 99.99th=[ 169] 00:30:13.720 bw ( KiB/s): min=17064, max=25184, per=99.99%, avg=23082.00, stdev=4013.34, samples=4 00:30:13.720 iops : min= 4266, max= 6296, avg=5770.50, stdev=1003.33, samples=4 00:30:13.720 lat (usec) : 500=0.01%, 750=0.01% 00:30:13.720 lat (msec) : 2=0.04%, 4=0.10%, 10=49.21%, 20=50.08%, 250=0.55% 00:30:13.720 cpu : usr=60.54%, sys=37.07%, ctx=122, majf=0, minf=24 00:30:13.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:13.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.720 issued rwts: total=11628,11588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.720 00:30:13.720 Run status group 0 (all jobs): 00:30:13.720 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2008-2008msec 00:30:13.720 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2008-2008msec 00:30:13.720 20:00:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:13.977 20:00:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=668d4f90-56b7-4755-8750-1ebe230bc84c 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 668d4f90-56b7-4755-8750-1ebe230bc84c 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=668d4f90-56b7-4755-8750-1ebe230bc84c 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:15.374 { 00:30:15.374 "uuid": "d5251db7-9e77-400b-bd1a-8bb3e8f568cd", 00:30:15.374 "name": "lvs_0", 00:30:15.374 "base_bdev": "Nvme0n1", 00:30:15.374 "total_data_clusters": 930, 00:30:15.374 "free_clusters": 0, 00:30:15.374 "block_size": 512, 00:30:15.374 "cluster_size": 1073741824 00:30:15.374 }, 00:30:15.374 { 00:30:15.374 "uuid": "668d4f90-56b7-4755-8750-1ebe230bc84c", 00:30:15.374 "name": "lvs_n_0", 00:30:15.374 "base_bdev": "ab4e414e-7acd-4e77-a6e6-c2d1e21c8c0b", 00:30:15.374 "total_data_clusters": 237847, 00:30:15.374 "free_clusters": 237847, 00:30:15.374 "block_size": 512, 00:30:15.374 "cluster_size": 4194304 00:30:15.374 } 00:30:15.374 ]' 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="668d4f90-56b7-4755-8750-1ebe230bc84c") .free_clusters' 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="668d4f90-56b7-4755-8750-1ebe230bc84c") .cluster_size' 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:15.374 951388 00:30:15.374 20:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:15.945 18cd3f67-1b28-4c4b-93d4-8713aac8994e 00:30:15.945 20:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:16.205 20:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:16.463 20:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:16.720 20:00:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.979 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:16.979 fio-3.35 00:30:16.979 Starting 1 thread 00:30:16.979 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.513 00:30:19.513 test: (groupid=0, jobs=1): err= 0: pid=4089027: Thu Jul 25 20:00:28 2024 00:30:19.513 read: IOPS=5827, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2010msec) 00:30:19.513 slat (nsec): min=1952, max=142926, avg=2672.41, stdev=2164.43 00:30:19.513 clat (usec): min=4409, max=19853, avg=12046.94, stdev=1075.15 00:30:19.513 lat (usec): min=4427, max=19855, avg=12049.61, stdev=1075.05 00:30:19.513 clat percentiles (usec): 00:30:19.513 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:30:19.513 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:30:19.513 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:30:19.513 | 99.00th=[14353], 99.50th=[14615], 99.90th=[16909], 99.95th=[18482], 00:30:19.513 | 99.99th=[19792] 00:30:19.513 bw ( KiB/s): min=22192, max=23768, per=99.93%, avg=23292.00, stdev=737.40, samples=4 00:30:19.513 iops : min= 5548, max= 5942, avg=5823.00, stdev=184.35, samples=4 00:30:19.513 write: IOPS=5814, BW=22.7MiB/s (23.8MB/s)(45.7MiB/2010msec); 0 zone resets 00:30:19.513 slat (usec): min=2, max=146, avg= 2.78, stdev= 2.00 00:30:19.513 clat (usec): min=2177, max=18356, avg=9836.43, stdev=910.89 00:30:19.513 lat (usec): min=2182, max=18358, avg=9839.21, stdev=910.86 00:30:19.513 clat percentiles (usec): 00:30:19.513 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:30:19.513 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:30:19.513 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:30:19.513 | 99.00th=[11863], 99.50th=[12125], 99.90th=[15533], 99.95th=[16909], 00:30:19.513 | 99.99th=[16909] 00:30:19.513 bw ( KiB/s): min=23152, max=23360, per=99.98%, avg=23254.00, stdev=92.23, samples=4 00:30:19.513 iops : min= 5788, max= 5840, avg=5813.50, stdev=23.06, samples=4 00:30:19.513 lat (msec) : 4=0.05%, 10=30.11%, 20=69.84% 00:30:19.513 cpu : usr=56.60%, sys=40.87%, ctx=75, majf=0, minf=24 00:30:19.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:19.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.513 issued rwts: total=11713,11688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.513 00:30:19.513 Run status group 0 (all jobs): 00:30:19.513 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2010-2010msec 00:30:19.513 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.7MiB (47.9MB), run=2010-2010msec 00:30:19.513 20:00:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:19.513 20:00:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:19.513 20:00:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:23.700 20:00:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
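For orientation: the 951388 figure that get_lvs_free_mb echoes above is just free_clusters times cluster_size expressed in MiB (237847 * 4194304 / 1048576 = 951388), and the fio pass at host/fio.sh@70 drives the nqn.2016-06.io.spdk:cnode3 listener through SPDK's fio plugin rather than the kernel initiator. A condensed sketch of those two steps follows; it assumes the commands are run from the spdk checkout used by this job, and it selects the lvstore by name rather than by UUID purely for readability.

    # free MiB in the nested lvstore, derived from bdev_lvol_get_lvstores the same way get_lvs_free_mb does
    fc=$(scripts/rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .free_clusters')
    cs=$(scripts/rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .cluster_size')
    echo $(( fc * cs / 1048576 ))    # 237847 clusters of 4 MiB -> 951388

    # run fio through the spdk_nvme plugin against the NVMe/TCP listener exported above
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096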
00:30:23.700 20:00:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:26.981 20:00:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:26.981 20:00:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.882 rmmod nvme_tcp 00:30:28.882 rmmod nvme_fabrics 00:30:28.882 rmmod nvme_keyring 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4085701 ']' 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4085701 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 4085701 ']' 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 4085701 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4085701 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4085701' 00:30:28.882 killing process with pid 4085701 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 4085701 00:30:28.882 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 4085701 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.142 20:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.041 20:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:31.041 00:30:31.041 real 0m36.971s 00:30:31.041 user 2m22.020s 00:30:31.041 sys 0m6.922s 00:30:31.041 20:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:31.041 20:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 ************************************ 00:30:31.041 END TEST nvmf_fio_host 00:30:31.041 ************************************ 00:30:31.041 20:00:40 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:31.041 20:00:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:31.041 20:00:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:31.041 20:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 ************************************ 00:30:31.041 START TEST nvmf_failover 00:30:31.041 ************************************ 00:30:31.041 20:00:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:31.298 * Looking for test storage... 00:30:31.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.299 20:00:40 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.299 20:00:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.201 20:00:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:33.201 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:33.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:33.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.202 20:00:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:33.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:33.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.202 
20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.202 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:33.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:30:33.461 00:30:33.461 --- 10.0.0.2 ping statistics --- 00:30:33.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.461 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:30:33.461 00:30:33.461 --- 10.0.0.1 ping statistics --- 00:30:33.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.461 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4092269 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4092269 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4092269 ']' 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:33.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:33.461 20:00:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.461 [2024-07-25 20:00:42.726286] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:30:33.461 [2024-07-25 20:00:42.726369] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.461 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.461 [2024-07-25 20:00:42.806632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:33.719 [2024-07-25 20:00:42.900628] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.719 [2024-07-25 20:00:42.900685] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.719 [2024-07-25 20:00:42.900703] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.719 [2024-07-25 20:00:42.900716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.719 [2024-07-25 20:00:42.900727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.719 [2024-07-25 20:00:42.900823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.719 [2024-07-25 20:00:42.902082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.719 [2024-07-25 20:00:42.902086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.719 20:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:33.719 20:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:33.719 20:00:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:33.719 20:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:33.719 20:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.719 20:00:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.720 20:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:33.977 [2024-07-25 20:00:43.266330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.978 20:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:34.235 Malloc0 00:30:34.235 20:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.493 20:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.059 20:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.059 [2024-07-25 20:00:44.410335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.059 20:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:35.317 [2024-07-25 20:00:44.703164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:35.317 20:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:35.576 [2024-07-25 20:00:44.992075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4092558 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4092558 /var/tmp/bdevperf.sock 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4092558 ']' 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:35.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
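Before the failover sequence below starts, the setup traced above amounts to one 64 MiB malloc bdev exported by nqn.2016-06.io.spdk:cnode1 on three TCP listeners (4420, 4421, 4422), with bdevperf started as a separate process and driven over its own RPC socket. A condensed sketch of that setup, assuming it is run from the spdk checkout and leaving out the cvl_0_0_ns_spdk network-namespace wrapper this job uses around nvmf_tgt; the three add_listener calls are folded into a loop here, where the script issues them one by one.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf (-z) idles until told what to do over /var/tmp/bdevperf.sock; the trace that follows
    # attaches the 4420 and 4421 paths as NVMe0 before kicking off the 15-second verify workload
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &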
00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:35.835 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:36.094 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:36.094 20:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:36.094 20:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:36.352 NVMe0n1 00:30:36.352 20:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:36.925 00:30:36.925 20:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4092776 00:30:36.925 20:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:36.925 20:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:37.904 20:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.162 [2024-07-25 20:00:47.531853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.531975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532146] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the 
state(5) to be set 00:30:38.162 [2024-07-25 20:00:47.532468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 [2024-07-25 20:00:47.532740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4d50 is same with the state(5) to be set 00:30:38.163 20:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:41.446 20:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:41.703 00:30:41.703 20:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:42.267 [2024-07-25 20:00:51.395669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.395973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 [2024-07-25 20:00:51.396195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5bd0 is same with the state(5) to be set 00:30:42.267 20:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:45.549 20:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.549 [2024-07-25 20:00:54.660673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.549 20:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:46.483 20:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:46.740 20:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4092776 00:30:53.307 0 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4092558 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4092558 ']' 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4092558 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4092558 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4092558' 00:30:53.307 killing process with pid 4092558 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4092558 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4092558 00:30:53.307 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:53.307 [2024-07-25 20:00:45.056832] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:30:53.307 [2024-07-25 20:00:45.056908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092558 ] 00:30:53.307 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.307 [2024-07-25 20:00:45.121909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.307 [2024-07-25 20:00:45.208558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.307 Running I/O for 15 seconds... 
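The try.txt dump that follows is bdevperf's own log; the ABORTED - SQ DELETION completions it contains are consistent with queue pairs being deleted as each listener is removed while the verify job still has I/O in flight, which is the point of the failover test. Condensed, the listener shuffle traced above is:

    # performed while bdevperf's 15-second verify workload runs (host/failover.sh@43-57)
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    wait $run_test_pid    # host/failover.sh@59: block until bdevperf.py perform_tests finishes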
00:30:53.307 [2024-07-25 20:00:47.534938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.308 [2024-07-25 20:00:47.534980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.308 [2024-07-25 20:00:47.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.308 [2024-07-25 20:00:47.535103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.308 [2024-07-25 20:00:47.535134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.308 [2024-07-25 20:00:47.535164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535640] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81064 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.535978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.535991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.536005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.536019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.536033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.536054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.536094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.536109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.536127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.308 [2024-07-25 20:00:47.536143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.308 [2024-07-25 20:00:47.536159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 
[2024-07-25 20:00:47.536260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.536973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.536989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.309 [2024-07-25 20:00:47.537107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.309 [2024-07-25 20:00:47.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.309 [2024-07-25 20:00:47.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.309 [2024-07-25 20:00:47.537330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.309 [2024-07-25 20:00:47.537344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.537986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.537999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.538029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.538068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.538099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81608 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.310 [2024-07-25 20:00:47.538129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81616 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81632 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81640 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81648 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81656 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.310 [2024-07-25 20:00:47.538498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81664 len:8 PRP1 0x0 PRP2 0x0 00:30:53.310 [2024-07-25 20:00:47.538512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.310 [2024-07-25 20:00:47.538525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.310 [2024-07-25 20:00:47.538537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81672 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81680 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81688 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81696 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81704 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 
[2024-07-25 20:00:47.538772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81712 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81720 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81728 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81736 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.538957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.538971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.538982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.538994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81752 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539086] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81760 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81768 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81776 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81784 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81800 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:53.311 [2024-07-25 20:00:47.539426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81808 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.311 [2024-07-25 20:00:47.539474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.311 [2024-07-25 20:00:47.539486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81816 len:8 PRP1 0x0 PRP2 0x0 00:30:53.311 [2024-07-25 20:00:47.539499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539557] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dc5b50 was disconnected and freed. reset controller. 00:30:53.311 [2024-07-25 20:00:47.539575] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:53.311 [2024-07-25 20:00:47.539608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.311 [2024-07-25 20:00:47.539626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.311 [2024-07-25 20:00:47.539654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.311 [2024-07-25 20:00:47.539681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.311 [2024-07-25 20:00:47.539707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.311 [2024-07-25 20:00:47.539720] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.311 [2024-07-25 20:00:47.539764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6eb0 (9): Bad file descriptor 00:30:53.311 [2024-07-25 20:00:47.543001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.311 [2024-07-25 20:00:47.620569] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
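The try.txt excerpt above captures the first failover pass: queued and in-flight bdevperf I/O is completed as ABORTED - SQ DELETION once the qpair to 10.0.0.2:4420 is torn down, bdev_nvme_failover_trid switches the active path to 10.0.0.2:4421, and the subsequent controller reset succeeds; the log below repeats the same pattern for the next pass at 20:00:51. As a rough sketch (not the literal contents of host/failover.sh), a path switch like this can be driven with the same rpc.py calls logged earlier in this section, assuming the target still exposes nqn.2016-06.io.spdk:cnode1 and the bdevperf host already has both 4420 and 4421 attached as paths to the same controller:

  # Hypothetical driver for one failover round; the NQN, ports, and rpc.py path are
  # taken from this log, while the ordering and the $rpc variable are illustrative only.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Publish the alternate path on 4421, then drop the listener the host is using.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The host's submission queues on 4420 are deleted (hence the ABORTED - SQ DELETION
  # completions above) and bdev_nvme resets the controller against 10.0.0.2:4421.
  sleep 1
  # Re-adding 4420 restores the original path so the test can fail over again.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvmf_subsystem_add_listener / nvmf_subsystem_remove_listener invocations and their -t/-a/-s arguments mirror the ones recorded at the top of this section; everything else in the sketch is an assumption.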
00:30:53.311 [2024-07-25 20:00:51.395078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.312 [2024-07-25 20:00:51.395153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.395173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.312 [2024-07-25 20:00:51.395187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.395201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.312 [2024-07-25 20:00:51.395214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.395228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.312 [2024-07-25 20:00:51.395242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.395255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da6eb0 is same with the state(5) to be set 00:30:53.312 [2024-07-25 20:00:51.397827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.397853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.397895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.397911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.397928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.397942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.397956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.397970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.397985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.397998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398641] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.312 [2024-07-25 20:00:51.398669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.312 [2024-07-25 20:00:51.398684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.398977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.398990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.313 [2024-07-25 20:00:51.399020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 
20:00:51.399538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.313 [2024-07-25 20:00:51.399831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.313 [2024-07-25 20:00:51.399844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.399859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.399873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.399888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.399902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.399916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.399930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.399948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.399962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.399977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.399991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110040 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.314 [2024-07-25 20:00:51.400437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110048 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110056 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110064 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110072 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110080 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110088 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110096 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110104 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110112 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110120 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.314 [2024-07-25 20:00:51.400959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.314 [2024-07-25 20:00:51.400970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.314 [2024-07-25 20:00:51.400981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110128 len:8 PRP1 0x0 PRP2 0x0 00:30:53.314 [2024-07-25 20:00:51.400994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110136 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:53.315 [2024-07-25 20:00:51.401056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110144 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110160 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110168 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110176 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110184 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401353] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110192 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110200 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110208 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110216 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110224 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110232 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110240 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110248 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110256 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110264 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110272 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110280 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 
[2024-07-25 20:00:51.401946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.401957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110288 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.401970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.401983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.401994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.402005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110296 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.402018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.402031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.402041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.402053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110304 len:8 PRP1 0x0 PRP2 0x0 00:30:53.315 [2024-07-25 20:00:51.402071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.315 [2024-07-25 20:00:51.402085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.315 [2024-07-25 20:00:51.402096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.315 [2024-07-25 20:00:51.402108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110312 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110320 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110328 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402243] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110336 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110344 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110352 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109648 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.316 [2024-07-25 20:00:51.402434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.316 [2024-07-25 20:00:51.402446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109656 len:8 PRP1 0x0 PRP2 0x0 00:30:53.316 [2024-07-25 20:00:51.402458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:51.402519] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f705b0 was disconnected and freed. reset controller. 00:30:53.316 [2024-07-25 20:00:51.402536] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:53.316 [2024-07-25 20:00:51.402551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:53.316 [2024-07-25 20:00:51.405796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.316 [2024-07-25 20:00:51.405837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6eb0 (9): Bad file descriptor 00:30:53.316 [2024-07-25 20:00:51.444153] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:53.316 [2024-07-25 20:00:55.910220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:53.316 [2024-07-25 20:00:55.910878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.316 [2024-07-25 20:00:55.910891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.910975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.316 [2024-07-25 20:00:55.910989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.316 [2024-07-25 20:00:55.911002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911191] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50976 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.911974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.911989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.912002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.912017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.912031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.317 [2024-07-25 20:00:55.912054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.317 [2024-07-25 20:00:55.912075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:53.318 [2024-07-25 20:00:55.912134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.318 [2024-07-25 20:00:55.912751] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.912978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.912991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.318 [2024-07-25 20:00:55.913232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.318 [2024-07-25 20:00:55.913247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.318 [2024-07-25 20:00:55.913261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:53.319 [2024-07-25 20:00:55.913371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.913979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.913993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.914021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.914049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.914087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.914117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.914146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.319 [2024-07-25 20:00:55.914174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dca6d0 is same with the state(5) to be set 00:30:53.319 [2024-07-25 20:00:55.914205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.319 [2024-07-25 20:00:55.914217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.319 [2024-07-25 20:00:55.914232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50632 len:8 PRP1 0x0 PRP2 0x0 00:30:53.319 [2024-07-25 20:00:55.914248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914307] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dca6d0 was disconnected and freed. reset controller. 
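Every command still queued on the path being torn down is completed manually with ABORTED - SQ DELETION before the qpair is freed and the controller is reset, which is why the same completion status repeats for each outstanding READ and WRITE above. When reading a capture like this, a quick way to gauge how much I/O was in flight at the moment of the path switch is to count those completions; a minimal sketch (the file name is an assumption — point it at whichever captured bdevperf log you are reading, e.g. the try.txt the harness writes under test/nvmf/host):

  # count the in-flight commands that were flushed when the submission queue was deleted
  grep -c 'ABORTED - SQ DELETION' try.txt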
00:30:53.319 [2024-07-25 20:00:55.914326] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:53.319 [2024-07-25 20:00:55.914359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.319 [2024-07-25 20:00:55.914377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.319 [2024-07-25 20:00:55.914405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.319 [2024-07-25 20:00:55.914432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.319 [2024-07-25 20:00:55.914458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.319 [2024-07-25 20:00:55.914471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.319 [2024-07-25 20:00:55.914521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6eb0 (9): Bad file descriptor 00:30:53.319 [2024-07-25 20:00:55.917784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.319 [2024-07-25 20:00:55.995609] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
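The failover from 10.0.0.2:4422 back to 10.0.0.2:4420 recorded above works because the bdevperf instance holds several registered paths (trids) for the same controller name, and removing the active one makes bdev_nvme reset and reconnect on the next. A minimal sketch of that setup, mirroring the RPC calls the script traces further down (rpc.py path shortened; the RPC socket is the one bdevperf is started with via -r):

  # register one bdev with three alternative paths to the same subsystem
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # dropping the active path triggers a failover to the next registered trid
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1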
00:30:53.319 00:30:53.320 Latency(us) 00:30:53.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.320 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:53.320 Verification LBA range: start 0x0 length 0x4000 00:30:53.320 NVMe0n1 : 15.05 8746.82 34.17 483.57 0.00 13802.88 546.13 45826.65 00:30:53.320 =================================================================================================================== 00:30:53.320 Total : 8746.82 34.17 483.57 0.00 13802.88 546.13 45826.65 00:30:53.320 Received shutdown signal, test time was about 15.000000 seconds 00:30:53.320 00:30:53.320 Latency(us) 00:30:53.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.320 =================================================================================================================== 00:30:53.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4094540 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4094540 /var/tmp/bdevperf.sock 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4094540 ']' 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:53.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
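The grep -c / count check above is the pass criterion for the run that just ended: exactly three 'Resetting controller successful' messages are expected, one for each failover the test provokes. The bdevperf process launched next runs with -z, so it only initializes and then waits on its RPC socket; the script brings it up and later drives the workload over that same socket, roughly as follows (a condensed sketch of the traced steps, not the full script):

  # block until the application answers on its UNIX-domain RPC socket
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  # ...attach the NVMe-oF paths over RPC (see the traced bdev_nvme_attach_controller calls below)...
  # then kick off the actual I/O run and wait for it to finish
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  wait $!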
00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:53.320 20:01:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.320 20:01:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:53.320 20:01:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:53.320 20:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:53.320 [2024-07-25 20:01:02.258015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:53.320 20:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:53.320 [2024-07-25 20:01:02.506717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:53.320 20:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.577 NVMe0n1 00:30:53.577 20:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.835 00:30:53.835 20:01:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.409 00:30:54.409 20:01:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:54.409 20:01:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:54.409 20:01:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.667 20:01:04 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:57.953 20:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:57.953 20:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:57.953 20:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4095199 00:30:57.953 20:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:57.953 20:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4095199 00:30:59.329 0 00:30:59.329 20:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:59.329 [2024-07-25 20:01:01.772476] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:30:59.329 [2024-07-25 20:01:01.772558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094540 ] 00:30:59.329 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.329 [2024-07-25 20:01:01.832768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.329 [2024-07-25 20:01:01.914942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.329 [2024-07-25 20:01:04.056172] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:59.329 [2024-07-25 20:01:04.056268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.329 [2024-07-25 20:01:04.056292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.329 [2024-07-25 20:01:04.056310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.329 [2024-07-25 20:01:04.056324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.329 [2024-07-25 20:01:04.056338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.329 [2024-07-25 20:01:04.056352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.329 [2024-07-25 20:01:04.056373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.329 [2024-07-25 20:01:04.056387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.329 [2024-07-25 20:01:04.056400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.329 [2024-07-25 20:01:04.056450] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.329 [2024-07-25 20:01:04.056486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806eb0 (9): Bad file descriptor 00:30:59.329 [2024-07-25 20:01:04.188226] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:59.329 Running I/O for 1 seconds... 
00:30:59.329 00:30:59.329 Latency(us) 00:30:59.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.329 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:59.329 Verification LBA range: start 0x0 length 0x4000 00:30:59.329 NVMe0n1 : 1.01 8341.72 32.58 0.00 0.00 15279.34 3301.07 12039.21 00:30:59.329 =================================================================================================================== 00:30:59.329 Total : 8341.72 32.58 0.00 0.00 15279.34 3301.07 12039.21 00:30:59.329 20:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:59.329 20:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:59.329 20:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:59.587 20:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:59.587 20:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:59.845 20:01:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.104 20:01:09 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4094540 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4094540 ']' 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4094540 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4094540 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4094540' 00:31:03.457 killing process with pid 4094540 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4094540 00:31:03.457 20:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4094540 00:31:03.716 20:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:03.716 20:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:03.974 
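killprocess, traced above for the bdevperf pid, is the harness's standard way of stopping an SPDK application: confirm the pid is still alive, log what is about to be killed, then terminate and reap it. Condensed from the trace (the real helper in autotest_common.sh carries extra checks, such as refusing to kill a sudo process):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                          # fails if the process is already gone
      echo "killing process with pid $pid"    # matches the message seen in the log
      kill "$pid"
      wait "$pid"                             # reap the process
  }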
20:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.974 rmmod nvme_tcp 00:31:03.974 rmmod nvme_fabrics 00:31:03.974 rmmod nvme_keyring 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4092269 ']' 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4092269 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4092269 ']' 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4092269 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4092269 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4092269' 00:31:03.974 killing process with pid 4092269 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4092269 00:31:03.974 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4092269 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:04.232 20:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.761 20:01:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.761 00:31:06.761 real 0m35.153s 00:31:06.761 user 2m3.577s 00:31:06.761 sys 0m6.042s 00:31:06.761 20:01:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:06.761 20:01:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
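nvmftestfini undoes the setup in reverse: unload the kernel initiator modules, stop the nvmf target application, then drop the SPDK network namespace and flush the test addresses. Stripped of the tracing, the sequence above amounts to the sketch below (the pid variable name is assumed; the namespace and interface names are the ones printed in the log):

  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  killprocess "$nvmfpid"     # the target app, reactor_1 in this run
  remove_spdk_ns             # removes cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1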
00:31:06.761 ************************************ 00:31:06.761 END TEST nvmf_failover 00:31:06.761 ************************************ 00:31:06.761 20:01:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:06.761 20:01:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:06.761 20:01:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:06.761 20:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.761 ************************************ 00:31:06.761 START TEST nvmf_host_discovery 00:31:06.761 ************************************ 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:06.761 * Looking for test storage... 00:31:06.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:06.761 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.762 20:01:15 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.762 20:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:08.667 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:08.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:08.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:08.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:08.667 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:08.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:31:08.668 00:31:08.668 --- 10.0.0.2 ping statistics --- 00:31:08.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.668 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:31:08.668 00:31:08.668 --- 10.0.0.1 ping statistics --- 00:31:08.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.668 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4097800 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4097800 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 4097800 ']' 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:08.668 20:01:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.668 [2024-07-25 20:01:17.847791] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:31:08.668 [2024-07-25 20:01:17.847862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.668 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.668 [2024-07-25 20:01:17.919581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.668 [2024-07-25 20:01:18.012134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.668 [2024-07-25 20:01:18.012192] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.668 [2024-07-25 20:01:18.012207] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.668 [2024-07-25 20:01:18.012220] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.668 [2024-07-25 20:01:18.012230] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
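What the nvmftestinit trace above amounts to: the two E810 ports (cvl_0_0 / cvl_0_1) are split across network namespaces so that NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 goes out over the NIC ports instead of the kernel loopback, and the target application is then launched inside the namespace. A minimal sketch of that setup, assuming the same host-specific interface names and addresses as this run and that you start from the SPDK repo root (paths shortened from the full Jenkins workspace path):

  # target-side port moves into its own namespace; initiator port stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow the NVMe/TCP I/O port in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions
  # target app runs inside the namespace: shm id 0, all trace groups, core mask 0x2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
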
00:31:08.668 [2024-07-25 20:01:18.012258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.926 [2024-07-25 20:01:18.146619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.926 [2024-07-25 20:01:18.154793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.926 null0 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.926 null1 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4097940 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 4097940 /tmp/host.sock 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 4097940 ']' 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:08.926 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:08.926 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.927 [2024-07-25 20:01:18.230585] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:31:08.927 [2024-07-25 20:01:18.230660] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097940 ] 00:31:08.927 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.927 [2024-07-25 20:01:18.297629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.185 [2024-07-25 20:01:18.388358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.185 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:09.443 20:01:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.443 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.444 [2024-07-25 20:01:18.776458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.444 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:09.703 20:01:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:10.271 [2024-07-25 20:01:19.564793] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:10.271 [2024-07-25 20:01:19.564823] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:10.271 [2024-07-25 20:01:19.564848] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.271 [2024-07-25 20:01:19.653164] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:10.530 [2024-07-25 20:01:19.877335] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:10.531 [2024-07-25 20:01:19.877382] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:10.531 20:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.531 20:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:10.531 20:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.790 20:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.790 20:01:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.051 [2024-07-25 20:01:20.236718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:11.051 [2024-07-25 20:01:20.237437] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:11.051 [2024-07-25 20:01:20.237488] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:11.051 [2024-07-25 20:01:20.323937] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:11.051 20:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:11.051 [2024-07-25 20:01:20.387530] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:11.051 [2024-07-25 20:01:20.387553] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:11.051 [2024-07-25 20:01:20.387563] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.988 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.245 [2024-07-25 20:01:21.457055] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:12.245 [2024-07-25 20:01:21.457114] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:12.245 [2024-07-25 20:01:21.458413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.245 [2024-07-25 20:01:21.458461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.245 [2024-07-25 20:01:21.458479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.245 [2024-07-25 20:01:21.458493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.245 [2024-07-25 20:01:21.458523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.245 [2024-07-25 20:01:21.458539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.245 [2024-07-25 20:01:21.458554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.245 [2024-07-25 20:01:21.458585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.245 [2024-07-25 20:01:21.458599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.245 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.245 20:01:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:12.246 [2024-07-25 20:01:21.468427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.246 [2024-07-25 20:01:21.478473] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.478730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.478762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.478781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.478807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.478831] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.478847] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.478865] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.478888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
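Condensed from the rpc_cmd calls traced in this stretch of host/discovery.sh: the target (default RPC socket, inside the namespace) gets a TCP transport, a discovery listener on 8009, two null bdevs and subsystem cnode0, while a second nvmf_tgt on /tmp/host.sock plays the NVMe host and drives the discovery service. The sequence below reuses the autotest rpc_cmd wrapper exactly as it appears here; outside the harness the equivalent calls would typically go through scripts/rpc.py.

  # target side (default RPC socket /var/tmp/spdk.sock)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1        # second namespace -> nvme0n2
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

  # host side (second nvmf_tgt acting as the NVMe host, RPC socket /tmp/host.sock)
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
          -q nqn.2021-12.io.spdk:test
  # then poll until discovery has attached: controller "nvme0", bdevs "nvme0n1 nvme0n2", notifications counted
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
  rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'
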
00:31:12.246 [2024-07-25 20:01:21.488560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.488780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.488807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.488824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.488846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.488879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.488896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.488911] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.488930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.246 [2024-07-25 20:01:21.498639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.498902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.498932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.498950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.498974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.498997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.499013] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.499028] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.499056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:12.246 [2024-07-25 20:01:21.508716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.508920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.508951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.508969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.508994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.509030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.509049] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.509077] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.509101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
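The wait at host/discovery.sh@130 polls the host application's RPC socket until the bdev list settles to "nvme0n1 nvme0n2". A rough equivalent of that polling loop is sketched below; /tmp/host.sock is taken from the log, while the one-second sleep and the plain shell loop are assumptions standing in for the harness's waitforcondition helper:

  # Poll up to 10 times for the expected bdev names (sketch only).
  for i in $(seq 1 10); do
      names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
      [ "$names" = "nvme0n1 nvme0n2" ] && break
      sleep 1
  done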
00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.246 [2024-07-25 20:01:21.518797] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.519048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.519084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.519101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.519123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.519157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.519175] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.519188] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.519207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.246 [2024-07-25 20:01:21.528879] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.529096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.529128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.529145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.529167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.529212] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.529231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.529245] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.529264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.246 [2024-07-25 20:01:21.538955] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.246 [2024-07-25 20:01:21.539134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.246 [2024-07-25 20:01:21.539162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ba450 with addr=10.0.0.2, port=4420 00:31:12.246 [2024-07-25 20:01:21.539177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba450 is same with the state(5) to be set 00:31:12.246 [2024-07-25 20:01:21.539199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ba450 (9): Bad file descriptor 00:31:12.246 [2024-07-25 20:01:21.539220] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.246 [2024-07-25 20:01:21.539234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.246 [2024-07-25 20:01:21.539247] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.246 [2024-07-25 20:01:21.539279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.246 [2024-07-25 20:01:21.545274] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:12.246 [2024-07-25 20:01:21.545305] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.246 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:12.247 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.504 20:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.438 [2024-07-25 20:01:22.802352] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:13.438 [2024-07-25 20:01:22.802390] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:13.438 [2024-07-25 20:01:22.802412] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.695 [2024-07-25 20:01:22.888702] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:13.957 [2024-07-25 20:01:23.150590] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.957 [2024-07-25 20:01:23.150630] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.957 20:01:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.957 request: 00:31:13.957 { 00:31:13.957 "name": "nvme", 00:31:13.957 "trtype": "tcp", 00:31:13.957 "traddr": "10.0.0.2", 00:31:13.957 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:13.957 "adrfam": "ipv4", 00:31:13.957 "trsvcid": "8009", 00:31:13.957 "wait_for_attach": true, 00:31:13.957 "method": "bdev_nvme_start_discovery", 00:31:13.957 "req_id": 1 00:31:13.957 } 00:31:13.957 Got JSON-RPC error response 00:31:13.957 response: 00:31:13.957 { 00:31:13.957 "code": -17, 00:31:13.957 "message": "File exists" 00:31:13.957 } 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.957 request: 00:31:13.957 { 00:31:13.957 "name": "nvme_second", 00:31:13.957 "trtype": "tcp", 00:31:13.957 "traddr": "10.0.0.2", 00:31:13.957 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:13.957 "adrfam": "ipv4", 00:31:13.957 "trsvcid": "8009", 00:31:13.957 "wait_for_attach": true, 00:31:13.957 "method": "bdev_nvme_start_discovery", 00:31:13.957 "req_id": 1 00:31:13.957 } 00:31:13.957 Got JSON-RPC error response 00:31:13.957 response: 00:31:13.957 { 00:31:13.957 "code": -17, 00:31:13.957 "message": "File exists" 00:31:13.957 } 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
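Both -17 responses above come from starting discovery a second time against the service that is already attached at 10.0.0.2:8009; the RPC is rejected with "File exists" whether the original bdev name prefix (nvme) or a new one (nvme_second) is used. Replaying the second attempt by hand, with the same assumed host socket and SPDK checkout, looks like:

  # Discovery to 10.0.0.2:8009 is already running, so this returns -17 "File exists".
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w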
00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:13.957 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.958 20:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.334 [2024-07-25 20:01:24.370078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.334 [2024-07-25 20:01:24.370170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b6500 with addr=10.0.0.2, port=8010 00:31:15.334 [2024-07-25 20:01:24.370200] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:15.334 [2024-07-25 20:01:24.370216] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:15.334 [2024-07-25 20:01:24.370230] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:16.272 [2024-07-25 20:01:25.372597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.272 [2024-07-25 20:01:25.372651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b6500 with addr=10.0.0.2, port=8010 00:31:16.272 [2024-07-25 20:01:25.372680] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:16.272 [2024-07-25 20:01:25.372696] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:16.272 [2024-07-25 20:01:25.372710] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:17.207 [2024-07-25 20:01:26.374696] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:17.207 request: 00:31:17.207 { 00:31:17.207 "name": "nvme_second", 00:31:17.207 "trtype": "tcp", 00:31:17.207 "traddr": "10.0.0.2", 00:31:17.207 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:17.207 "adrfam": "ipv4", 00:31:17.207 "trsvcid": "8010", 00:31:17.207 "attach_timeout_ms": 3000, 00:31:17.207 "method": "bdev_nvme_start_discovery", 00:31:17.207 "req_id": 1 00:31:17.207 } 00:31:17.207 Got JSON-RPC error response 00:31:17.207 response: 00:31:17.207 { 00:31:17.207 "code": -110, 00:31:17.207 "message": 
"Connection timed out" 00:31:17.207 } 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4097940 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:17.207 rmmod nvme_tcp 00:31:17.207 rmmod nvme_fabrics 00:31:17.207 rmmod nvme_keyring 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4097800 ']' 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4097800 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 4097800 ']' 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 4097800 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4097800 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:17.207 20:01:26 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4097800' 00:31:17.207 killing process with pid 4097800 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 4097800 00:31:17.207 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 4097800 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:17.465 20:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.370 20:01:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:19.370 00:31:19.370 real 0m13.115s 00:31:19.370 user 0m19.016s 00:31:19.370 sys 0m2.773s 00:31:19.370 20:01:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:19.370 20:01:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.370 ************************************ 00:31:19.370 END TEST nvmf_host_discovery 00:31:19.370 ************************************ 00:31:19.370 20:01:28 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:19.370 20:01:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:19.370 20:01:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:19.370 20:01:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:19.629 ************************************ 00:31:19.629 START TEST nvmf_host_multipath_status 00:31:19.629 ************************************ 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:19.629 * Looking for test storage... 
00:31:19.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.629 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:19.630 20:01:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:19.630 20:01:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:21.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
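The scan above classifies the two 0x8086:0x159b functions as E810 ports before reading their kernel net devices out of sysfs. A quick manual cross-check of the same inventory, assuming lspci is available on the node (device IDs and PCI addresses taken from the log), would be:

  # The same two ports nvmf/common.sh reports.
  lspci -d 8086:159b
  # The net device bound to each function, as the script reads it from sysfs.
  ls /sys/bus/pci/devices/0000:0a:00.0/net
  ls /sys/bus/pci/devices/0000:0a:00.1/net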
00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:21.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:21.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:21.561 20:01:30 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.561 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:21.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:21.562 00:31:21.562 --- 10.0.0.2 ping statistics --- 00:31:21.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.562 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:31:21.562 00:31:21.562 --- 10.0.0.1 ping statistics --- 00:31:21.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.562 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4100963 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4100963 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 4100963 ']' 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:21.562 20:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:21.819 [2024-07-25 20:01:31.010959] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
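[Editor's note] The block above is the nvmf_tcp_init step of nvmf/common.sh: one port of the NIC pair is moved into a private network namespace, both ends are addressed, the NVMe/TCP port is opened, connectivity is pinged in both directions, and nvmf_tgt is started inside the namespace. A condensed replay of those commands, reconstructed from the trace (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and the SPDK workspace path are taken from the log; this is a sketch, not the test script itself, and must run as root):

  #!/usr/bin/env bash
  # Condensed replay of the traced target-side network setup.
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                                    # initiator -> namespaced target
  ip netns exec "$NS" ping -c 1 10.0.0.1                # namespaced target -> initiator

  # Start the target inside the namespace with the same flags as the traced run.
  ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &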
00:31:21.819 [2024-07-25 20:01:31.011031] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.819 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.819 [2024-07-25 20:01:31.074268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:21.819 [2024-07-25 20:01:31.162159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.819 [2024-07-25 20:01:31.162214] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.819 [2024-07-25 20:01:31.162243] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.819 [2024-07-25 20:01:31.162254] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.819 [2024-07-25 20:01:31.162264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.820 [2024-07-25 20:01:31.162317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.820 [2024-07-25 20:01:31.162322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4100963 00:31:22.078 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:22.078 [2024-07-25 20:01:31.507102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.336 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:22.594 Malloc0 00:31:22.594 20:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:22.852 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.110 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.369 [2024-07-25 20:01:32.577143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.369 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:23.627 [2024-07-25 20:01:32.825842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4101246 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4101246 /var/tmp/bdevperf.sock 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 4101246 ']' 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:23.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:23.627 20:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:23.886 20:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:23.886 20:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:23.886 20:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:24.142 20:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:24.400 Nvme0n1 00:31:24.400 20:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:24.969 Nvme0n1 00:31:24.969 20:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:24.969 20:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:26.873 20:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:26.873 20:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:27.130 20:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:27.390 20:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:28.765 20:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:28.765 20:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.765 20:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.765 20:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.765 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.765 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:28.765 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.765 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:29.023 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:29.023 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:29.023 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.023 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.281 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.281 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.281 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.281 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.540 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.540 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.540 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.540 20:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:29.799 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.799 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.799 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.799 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:30.057 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.057 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:30.057 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:30.314 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:30.574 20:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:31.512 20:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:31.512 20:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:31.512 20:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.512 20:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.770 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.770 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:31.770 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.770 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.028 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.028 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.028 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.028 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.286 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:32.286 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.286 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.286 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.544 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.544 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.544 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.544 20:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.802 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.802 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:32.802 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.802 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.060 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.060 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:33.060 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:33.318 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:33.576 20:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:34.513 20:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:34.513 20:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:34.513 20:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.513 20:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.771 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.771 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:34.771 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.771 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.029 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.029 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.029 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.029 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.286 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.286 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.286 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.286 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.544 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.544 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.544 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.544 20:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.802 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.802 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:35.802 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.802 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.059 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.059 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:36.059 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:36.317 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:36.575 20:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:37.512 20:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:37.512 20:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:37.512 20:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.512 20:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.814 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.814 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:37.814 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.814 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:38.072 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.072 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.072 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.072 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.329 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.329 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.329 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.329 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.586 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.586 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:38.586 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.586 20:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.844 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
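[Editor's note] Every port_status line in this trace follows the same pattern: query bdevperf's RPC socket for its I/O paths with bdev_nvme_get_io_paths, pick one field (current / connected / accessible) for one listener port with jq, and compare it against the expected value. A sketch of that probe, assuming the RPC socket and SPDK path shown in the log:

  #!/usr/bin/env bash
  # Per-port status probe behind the repeated checks above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  port_status() {    # port_status <trsvcid> <field> <expected>
      local port=$1 field=$2 expected=$3 actual
      actual=$("$RPC" -s "$SOCK" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]    # non-zero status fails the check
  }

  # Example, matching the first check_status call in the trace:
  port_status 4420 current true
  port_status 4421 current false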
00:31:38.844 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:38.844 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.844 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.102 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.102 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:39.102 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:39.360 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:39.616 20:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:40.546 20:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:40.546 20:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:40.546 20:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.546 20:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.803 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.803 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:40.803 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.803 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.060 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.060 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.060 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.060 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.318 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.318 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
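[Editor's note] Each ANA transition in these cycles is driven by a pair of nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port, followed by a one-second pause so the host can pick up the change. A sketch of that helper, with the subsystem NQN, address, and ports taken from the trace:

  #!/usr/bin/env bash
  # State-transition helper driving the set_ANA_state steps above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {    # set_ANA_state <state for 4420> <state for 4421>
      "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # The trace cycles through combinations such as:
  set_ANA_state non_optimized optimized
  sleep 1
  set_ANA_state inaccessible inaccessible
  sleep 1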
00:31:41.318 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.318 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.575 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.575 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:41.575 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.575 20:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.832 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.832 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:41.832 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.832 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:42.090 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.090 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:42.090 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:42.347 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:42.605 20:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:43.541 20:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:43.541 20:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:43.541 20:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.541 20:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.799 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.799 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:43.799 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.799 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.056 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.056 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:44.056 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.056 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.313 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.313 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.313 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.313 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.570 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.570 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:44.570 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.570 20:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.827 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.827 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:44.827 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.827 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:45.087 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.087 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:45.345 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:45.345 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:45.602 20:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:45.860 20:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:46.793 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:46.793 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:46.793 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.793 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:47.051 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.051 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:47.051 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.051 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:47.308 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.309 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:47.309 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.309 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.567 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.567 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.567 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.567 20:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:47.825 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.825 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:47.825 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.825 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:48.083 20:01:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.083 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:48.083 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.083 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:48.341 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.341 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:48.341 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:48.599 20:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.858 20:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:49.796 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:49.796 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:49.796 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.796 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:50.054 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.054 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:50.054 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.054 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.312 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.312 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.312 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.312 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.569 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.569 20:01:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.570 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.570 20:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.827 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.827 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.827 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.827 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:51.085 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.085 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:51.085 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.085 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:51.343 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.343 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:51.343 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.600 20:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:51.860 20:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:53.233 20:02:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.233 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.490 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.490 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.490 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.490 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.776 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.776 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.776 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.776 20:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:54.036 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.036 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:54.036 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.036 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:54.294 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:54.552 20:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:54.809 20:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.187 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.446 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.446 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.446 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.446 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.704 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.704 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.704 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.704 20:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.962 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.962 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.962 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.962 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:57.220 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.220 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:57.220 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.220 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4101246 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 4101246 ']' 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 4101246 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4101246 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4101246' 00:31:57.479 killing process with pid 4101246 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 4101246 00:31:57.479 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 4101246 00:31:57.479 Connection closed with partial response: 00:31:57.479 00:31:57.479 00:31:57.748 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4101246 00:31:57.748 20:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:57.748 [2024-07-25 20:01:32.887189] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:31:57.748 [2024-07-25 20:01:32.887277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101246 ] 00:31:57.748 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.748 [2024-07-25 20:01:32.948473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.748 [2024-07-25 20:01:33.033910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.748 Running I/O for 90 seconds... 
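[Editor's note] The remainder of this section is the contents of try.txt, the bdevperf-side log dumped after the process is killed. For orientation, a simplified reconstruction of the host-side commands that produced it, pieced together from the commands earlier in the trace (flags and paths as logged; readiness waits and cleanup traps are omitted, and in the traced run the active_active policy switch happens partway through the checks, so treat this as a sketch rather than the test script itself):

  #!/usr/bin/env bash
  # Host side: bdevperf in RPC-managed mode, both listener ports attached as
  # multipath I/O paths, verify workload running while ANA states are flipped.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  "$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 90 &
  bdevperf_pid=$!

  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -l -1 -o 10
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x multipath -l -1 -o 10
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

  # Drive the verify workload in the background; the set_ANA_state /
  # check_status cycles traced above run while this I/O is in flight.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -t 120 -s "$SOCK" perform_tests &
  perform_pid=$!

  wait "$perform_pid" || true   # may end early if all paths go inaccessible
  kill "$bdevperf_pid"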
00:31:57.748 [2024-07-25 20:01:48.595315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.748 [2024-07-25 20:01:48.595405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:57.748 [2024-07-25 20:01:48.595468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.748 [2024-07-25 20:01:48.595489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:57.748 [2024-07-25 20:01:48.595514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.748 [2024-07-25 20:01:48.595530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:57.748 [2024-07-25 20:01:48.595553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.748 [2024-07-25 20:01:48.595570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.595592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.595608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.595630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.595646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.595669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.595685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.595718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.595735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.595756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.595773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.597942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.597967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 
[2024-07-25 20:01:48.597985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87016 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:57.749 [2024-07-25 20:01:48.598703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.749 [2024-07-25 20:01:48.598728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.598761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.598809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.598827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.598855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.598874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.598902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.598920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.598948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.598966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.598995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599580] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.750 [2024-07-25 20:01:48.599914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.599959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.599987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 
m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.750 [2024-07-25 20:01:48.600514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:57.750 [2024-07-25 20:01:48.600541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.600954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.600971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.601166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.601222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.601274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.601324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:01:48.601390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.751 [2024-07-25 20:01:48.601440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.751 [2024-07-25 20:01:48.601489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.751 [2024-07-25 20:01:48.601542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.751 [2024-07-25 20:01:48.601592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:57.751 [2024-07-25 20:01:48.601641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.751 [2024-07-25 20:01:48.601690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:01:48.601721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.751 [2024-07-25 20:01:48.601739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:57.751 [2024-07-25 20:02:04.197617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.751 [2024-07-25 20:02:04.197634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.197974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.197992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.752 
[2024-07-25 20:02:04.198303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.198575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.198615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.752 [2024-07-25 20:02:04.198632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.752 [2024-07-25 20:02:04.200641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.752 [2024-07-25 20:02:04.200698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.752 [2024-07-25 20:02:04.200738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.752 [2024-07-25 20:02:04.200777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.752 [2024-07-25 20:02:04.200817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.752 [2024-07-25 20:02:04.200839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.752 [2024-07-25 20:02:04.200855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.200877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.200894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.200917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.200933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.200955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.200972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.200994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.201011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.201073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.201117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:57.753 [2024-07-25 20:02:04.201162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.201203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.201244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.201284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.201324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.201380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.201418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.201441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.753 [2024-07-25 20:02:04.201458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.753 [2024-07-25 20:02:04.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:57.753 [2024-07-25 20:02:04.202756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.202774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.202796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.202813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.202852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.202869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.202892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.202908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.202930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.202947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.202969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.202990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:31:57.754 [2024-07-25 20:02:04.203327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.203423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.203960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.203983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.204000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.204022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.204038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.204087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.204105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.204129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.204151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.206746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.206787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.206829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.754 [2024-07-25 20:02:04.206849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.206875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.206892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:57.754 [2024-07-25 20:02:04.206916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.754 [2024-07-25 20:02:04.206932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.206955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.206973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.206996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:57.755 [2024-07-25 20:02:04.207185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.207960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.207982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.207999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.208054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.208107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.208148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.755 [2024-07-25 20:02:04.208188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.208229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.208269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.208314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:57.755 [2024-07-25 20:02:04.208338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.755 [2024-07-25 20:02:04.208371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.208396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.208413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:31:57.756 [2024-07-25 20:02:04.208435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.208452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.208474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.208491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.208514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.208531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.210967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.210988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.211790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:57.756 [2024-07-25 20:02:04.211871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.211974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.756 [2024-07-25 20:02:04.211995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.212019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.212037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.213279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.213311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.213340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.756 [2024-07-25 20:02:04.213359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:57.756 [2024-07-25 20:02:04.213383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.213907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.213966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.213982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.214246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.214287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.214331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:31:57.757 [2024-07-25 20:02:04.214355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.214372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.214636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.757 [2024-07-25 20:02:04.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.215345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.215370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.215398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.215417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.215441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.215458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.215481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.215514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:57.757 [2024-07-25 20:02:04.215543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.757 [2024-07-25 20:02:04.215560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.758 [2024-07-25 20:02:04.215614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.758 [2024-07-25 20:02:04.215652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.758 [2024-07-25 20:02:04.215690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.758 [2024-07-25 20:02:04.215727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.758 [2024-07-25 20:02:04.215765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.758 [2024-07-25 20:02:04.215802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:57.758 [2024-07-25 20:02:04.215824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.758 [2024-07-25 20:02:04.215845] nvme_qpair.c: 
00:31:57.758 - 00:31:57.763 [2024-07-25 20:02:04.215884 - 20:02:04.234079] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for several hundred queued READ and WRITE commands on qid:1 nsid:1 (lba range ~76784-78928, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0.
nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.763 [2024-07-25 20:02:04.234099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.234122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.763 [2024-07-25 20:02:04.234140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.234163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.763 [2024-07-25 20:02:04.234180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.234203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.763 [2024-07-25 20:02:04.234221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.235798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.763 [2024-07-25 20:02:04.235824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.235853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.763 [2024-07-25 20:02:04.235871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.235895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.763 [2024-07-25 20:02:04.235913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:57.763 [2024-07-25 20:02:04.235936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.763 [2024-07-25 20:02:04.235954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.235977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.235994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:31:57.764 [2024-07-25 20:02:04.236483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.236771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.236968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.236991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.764 [2024-07-25 20:02:04.237008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.237031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.237049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.237083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.237102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.237126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.237143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.238813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.238839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.238867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.238885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.238908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.238925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.238949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.238966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.238989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.239006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.239029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.239070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.239098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.239116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.239139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.239156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.239179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.764 [2024-07-25 20:02:04.239197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:57.764 [2024-07-25 20:02:04.239220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:57.765 [2024-07-25 20:02:04.239482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.239743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.239798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.239838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.239876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.239916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.239955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.239977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.240010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.240033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.240071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.240100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.240118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.240154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.240172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.240200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.240218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.242845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.242870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.242914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.242933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.242955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.242972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.242994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.243084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.243132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.243174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.243213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:31:57.765 [2024-07-25 20:02:04.243460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.765 [2024-07-25 20:02:04.243478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:57.765 [2024-07-25 20:02:04.243543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.765 [2024-07-25 20:02:04.243560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.243600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.243641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.243682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.243722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.243763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.243822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.243876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.243920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.243976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.243999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.244124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.244165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.244522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.244555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.245742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.245782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.245810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.245844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.245869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.245886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.245910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.245928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.245951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.245968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.245992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:57.766 [2024-07-25 20:02:04.246010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.246333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.246372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.246436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.766 [2024-07-25 20:02:04.246493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.766 [2024-07-25 20:02:04.246549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.766 [2024-07-25 20:02:04.246573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.246590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.246631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.246671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.246711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.246752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.246793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.246853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.246895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.246949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.246972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.246989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:31:57.767 [2024-07-25 20:02:04.248444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.248665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.248964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.248989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.249006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.249030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.767 [2024-07-25 20:02:04.249065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.249092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.767 [2024-07-25 20:02:04.249109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:57.767 [2024-07-25 20:02:04.250199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.250399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.250484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.250587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.250670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.250693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.250711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.251582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.251662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.251703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.251743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:57.768 [2024-07-25 20:02:04.251783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.251823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.251880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.251936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.251960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.251993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.252173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 
nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.252489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.252532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.252572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.252613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.252637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.252655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.253690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.253713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.253757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.253778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.253801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.768 [2024-07-25 20:02:04.253818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:57.768 [2024-07-25 20:02:04.253840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.768 [2024-07-25 20:02:04.253857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:57.769 [2024-07-25 20:02:04.253879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.769 [2024-07-25 20:02:04.253894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:57.769 [2024-07-25 20:02:04.253916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.769 [2024-07-25 20:02:04.253933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:57.769 [2024-07-25 20:02:04.253955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.769 [2024-07-25 20:02:04.253971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:57.769 [2024-07-25 20:02:04.253993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.769 [2024-07-25 20:02:04.254009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.769 [2024-07-25 20:02:04.254031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.769 [2024-07-25 20:02:04.254082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.769 [2024-07-25 20:02:04.254108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.769 [2024-07-25 20:02:04.254125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.769 Received shutdown signal, test time was about 32.342697 seconds 00:31:57.769 00:31:57.769 Latency(us) 00:31:57.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.769 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:57.769 Verification LBA range: start 0x0 length 0x4000 00:31:57.769 
Nvme0n1 : 32.34 8077.20 31.55 0.00 0.00 15819.44 1116.54 4026531.84 00:31:57.769 =================================================================================================================== 00:31:57.769 Total : 8077.20 31.55 0.00 0.00 15819.44 1116.54 4026531.84 00:31:57.769 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.029 rmmod nvme_tcp 00:31:58.029 rmmod nvme_fabrics 00:31:58.029 rmmod nvme_keyring 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4100963 ']' 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4100963 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 4100963 ']' 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 4100963 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4100963 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4100963' 00:31:58.029 killing process with pid 4100963 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 4100963 00:31:58.029 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 4100963 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:58.288 20:02:07 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.288 20:02:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.827 20:02:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:00.827 00:32:00.827 real 0m40.832s 00:32:00.827 user 2m3.128s 00:32:00.827 sys 0m10.515s 00:32:00.827 20:02:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:00.827 20:02:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:00.827 ************************************ 00:32:00.827 END TEST nvmf_host_multipath_status 00:32:00.827 ************************************ 00:32:00.827 20:02:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:00.827 20:02:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:00.827 20:02:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:00.827 20:02:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.827 ************************************ 00:32:00.827 START TEST nvmf_discovery_remove_ifc 00:32:00.827 ************************************ 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:00.827 * Looking for test storage... 
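For reference, the nvmftestfini teardown traced just above condenses to roughly the following shell sequence. This is a sketch reconstructed from the xtrace, not the script itself: the pid 4100963, the try.txt scratch file and the cvl_0_1 interface name are specific to this run, and the namespace deletion is an assumption about what _remove_spdk_ns does.

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem used by the multipath test
    rm -f test/nvmf/host/try.txt                                      # scratch file left by multipath_status.sh
    sync
    modprobe -v -r nvme-tcp                                           # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 4100963                                                      # killprocess: stop the nvmf_tgt reactor (pid from this run)
    ip netns delete cvl_0_0_ns_spdk                                   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                          # clear the initiator-side address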
00:32:00.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:00.827 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:00.828 20:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:02.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:02.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.731 20:02:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:02.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:02.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.731 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:02.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:32:02.731 00:32:02.731 --- 10.0.0.2 ping statistics --- 00:32:02.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.732 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:02.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:32:02.732 00:32:02.732 --- 10.0.0.1 ping statistics --- 00:32:02.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.732 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4107321 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4107321 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 4107321 ']' 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:02.732 20:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.732 [2024-07-25 20:02:11.941124] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
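The nvmf_tcp_init steps traced above build a small two-namespace TCP test bed before nvmfappstart launches the target. Condensed from the trace (cvl_0_0 and cvl_0_1 are the two E810 ports detected just before), the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                                        # target port gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # verify reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # nvmfappstart

Everything above appears verbatim in the trace except the nvmf_tgt path, which is shortened here to the repo-relative form.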
00:32:02.732 [2024-07-25 20:02:11.941212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.732 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.732 [2024-07-25 20:02:12.004696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.732 [2024-07-25 20:02:12.086839] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.732 [2024-07-25 20:02:12.086910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.732 [2024-07-25 20:02:12.086938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.732 [2024-07-25 20:02:12.086949] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.732 [2024-07-25 20:02:12.086959] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.732 [2024-07-25 20:02:12.086984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.991 [2024-07-25 20:02:12.233472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.991 [2024-07-25 20:02:12.241639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:02.991 null0 00:32:02.991 [2024-07-25 20:02:12.273565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4107415 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4107415 /tmp/host.sock 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 4107415 ']' 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:02.991 
20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:02.991 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:02.991 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.991 [2024-07-25 20:02:12.343622] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:32:02.991 [2024-07-25 20:02:12.343699] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107415 ] 00:32:02.991 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.991 [2024-07-25 20:02:12.404984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.250 [2024-07-25 20:02:12.492314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.250 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:03.250 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:03.250 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.250 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.251 20:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.628 [2024-07-25 20:02:13.712216] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:04.628 [2024-07-25 20:02:13.712257] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:04.628 [2024-07-25 20:02:13.712284] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:04.628 [2024-07-25 20:02:13.799567] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:04.628 [2024-07-25 20:02:13.982756] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:04.628 [2024-07-25 20:02:13.982833] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:04.628 [2024-07-25 20:02:13.982877] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:04.628 [2024-07-25 20:02:13.982907] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:04.628 [2024-07-25 20:02:13.982951] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.628 20:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.628 [2024-07-25 20:02:13.989610] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14bbdf0 was disconnected and freed. delete nvme_qpair. 
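The host-side sequence traced here — bdev_nvme options, framework init, discovery with --wait-for-attach, then polling until the namespace bdev appears — can be reproduced with plain rpc.py calls. A sketch under the assumption that rpc_cmd in the harness simply forwards to scripts/rpc.py against the /tmp/host.sock application socket; the addresses, NQN and timeouts are the ones used by this run:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1         # options used by discovery_remove_ifc.sh
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait_for_bdev nvme0n1: poll once a second until the bdev list is exactly "nvme0n1"
    while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != "nvme0n1" ]]; do
        sleep 1
    done

Right after this point the test deletes 10.0.0.2/24 from cvl_0_0 inside the namespace and downs the link, which is what produces the spdk_sock_recv/connect() errno 110 errors and the controller resets seen further down in the trace.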
00:32:04.628 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.628 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:04.628 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:04.628 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.887 20:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.822 20:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.756 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.756 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.756 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.757 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.757 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.757 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:06.757 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.015 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.015 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.015 20:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.950 20:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.886 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.145 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.145 20:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.080 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
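The one-second polling repeated above is the core of this test: get_bdev_list asks the SPDK host application on /tmp/host.sock for its bdev names and wait_for_bdev loops until that list matches an expected value. A minimal sketch of what those helpers amount to, reconstructed from the commands echoed in the trace (the real discovery_remove_ifc.sh is the authority and may add details such as a retry limit; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):

    # Reconstructed sketch, not the verbatim test script.
    get_bdev_list() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected value
    # ('' while waiting for removal, a bdev name while waiting for rediscovery).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }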
00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:10.081 20:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.081 [2024-07-25 20:02:19.423931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:10.081 [2024-07-25 20:02:19.424021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.081 [2024-07-25 20:02:19.424043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.081 [2024-07-25 20:02:19.424081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.081 [2024-07-25 20:02:19.424096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.081 [2024-07-25 20:02:19.424110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.081 [2024-07-25 20:02:19.424122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.081 [2024-07-25 20:02:19.424136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.081 [2024-07-25 20:02:19.424148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.081 [2024-07-25 20:02:19.424162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.081 [2024-07-25 20:02:19.424175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.081 [2024-07-25 20:02:19.424187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482f80 is same with the state(5) to be set 00:32:10.081 [2024-07-25 20:02:19.433946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482f80 (9): Bad file descriptor 00:32:10.081 [2024-07-25 20:02:19.443993] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.021 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.281 [2024-07-25 20:02:20.493131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:11.281 [2024-07-25 
20:02:20.493201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482f80 with addr=10.0.0.2, port=4420 00:32:11.281 [2024-07-25 20:02:20.493230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482f80 is same with the state(5) to be set 00:32:11.281 [2024-07-25 20:02:20.493284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482f80 (9): Bad file descriptor 00:32:11.281 [2024-07-25 20:02:20.493750] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:11.281 [2024-07-25 20:02:20.493785] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.281 [2024-07-25 20:02:20.493803] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.281 [2024-07-25 20:02:20.493829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.281 [2024-07-25 20:02:20.493863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.281 [2024-07-25 20:02:20.493884] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.281 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.281 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:11.281 20:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.214 [2024-07-25 20:02:21.496383] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.214 [2024-07-25 20:02:21.496417] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.214 [2024-07-25 20:02:21.496434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.214 [2024-07-25 20:02:21.496449] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:12.214 [2024-07-25 20:02:21.496473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.214 [2024-07-25 20:02:21.496515] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:12.214 [2024-07-25 20:02:21.496558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.214 [2024-07-25 20:02:21.496581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.214 [2024-07-25 20:02:21.496600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.214 [2024-07-25 20:02:21.496615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.214 [2024-07-25 20:02:21.496630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.214 [2024-07-25 20:02:21.496645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.214 [2024-07-25 20:02:21.496660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.214 [2024-07-25 20:02:21.496675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.214 [2024-07-25 20:02:21.496690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.214 [2024-07-25 20:02:21.496706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.214 [2024-07-25 20:02:21.496720] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
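The entries above show the host side of the outage: reads on the TCP qpair time out (errno 110), reconnect attempts fail with the same errno, the controller for nqn.2016-06.io.spdk:cnode0 is marked failed, and the discovery service drops the 10.0.0.2:4420 entry. The test only watches the bdev list, but the same state could also be inspected directly; a purely illustrative command (not part of this script) against the same host socket, using the standard bdev_nvme_get_controllers RPC:

    # Illustrative only; the test itself relies on bdev_get_bdevs polling.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .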
00:32:12.214 [2024-07-25 20:02:21.497116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482410 (9): Bad file descriptor 00:32:12.214 [2024-07-25 20:02:21.498132] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:12.214 [2024-07-25 20:02:21.498154] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:12.214 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:12.215 20:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:13.590 20:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.156 [2024-07-25 20:02:23.506959] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:14.156 [2024-07-25 20:02:23.506990] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:14.156 [2024-07-25 20:02:23.507016] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.413 [2024-07-25 20:02:23.634513] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.413 [2024-07-25 20:02:23.697418] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:14.413 [2024-07-25 20:02:23.697467] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:14.413 [2024-07-25 20:02:23.697499] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:14.413 [2024-07-25 20:02:23.697523] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:14.413 [2024-07-25 20:02:23.697536] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:14.413 [2024-07-25 20:02:23.705536] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x148fd30 was disconnected and freed. delete nvme_qpair. 
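At this point the discovery poller has re-attached the subsystem and created nvme1n1, so the second wait loop can terminate. Condensed from the script lines echoed in the trace (discovery_remove_ifc.sh@75 through @86), the whole exercise reduces to a link-down/link-up cycle on the target-side interface inside the cvl_0_0_ns_spdk namespace, bracketed by wait_for_bdev as sketched earlier:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # drop the target address
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # take the target link down
    wait_for_bdev ''        # nvme0n1 disappears once reconnect attempts fail (errno 110 above)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # restore the address
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # bring the link back up
    wait_for_bdev nvme1n1   # discovery re-attaches the subsystem; the namespace returns as nvme1n1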
00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:14.413 20:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.349 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4107415 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 4107415 ']' 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 4107415 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:15.350 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4107415 00:32:15.608 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:15.608 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:15.608 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4107415' 00:32:15.608 killing process with pid 4107415 00:32:15.608 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 4107415 00:32:15.608 20:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 4107415 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:15.608 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:15.608 rmmod nvme_tcp 00:32:15.608 rmmod nvme_fabrics 00:32:15.868 rmmod nvme_keyring 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4107321 ']' 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4107321 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 4107321 ']' 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 4107321 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4107321 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4107321' 00:32:15.868 killing process with pid 4107321 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 4107321 00:32:15.868 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 4107321 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.129 20:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.037 20:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:18.037 00:32:18.037 real 0m17.681s 00:32:18.037 user 0m25.573s 00:32:18.037 sys 0m3.096s 00:32:18.037 20:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:18.037 20:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.037 ************************************ 00:32:18.037 END TEST nvmf_discovery_remove_ifc 00:32:18.037 ************************************ 00:32:18.037 20:02:27 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:18.037 20:02:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:18.037 20:02:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:18.037 20:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:32:18.037 ************************************ 00:32:18.037 START TEST nvmf_identify_kernel_target 00:32:18.037 ************************************ 00:32:18.037 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:18.296 * Looking for test storage... 00:32:18.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:18.296 20:02:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:18.296 20:02:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.199 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:20.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:32:20.200 00:32:20.200 --- 10.0.0.2 ping statistics --- 00:32:20.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.200 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:32:20.200 00:32:20.200 --- 10.0.0.1 ping statistics --- 00:32:20.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.200 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:20.200 20:02:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:20.200 20:02:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:21.577 Waiting for block devices as requested 00:32:21.577 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:21.577 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:21.577 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:21.577 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:21.836 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:21.836 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:21.836 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:21.836 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:21.836 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.096 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:22.096 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:22.096 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.354 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.354 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.354 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.354 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.612 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:22.612 No valid GPT data, bailing 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:22.612 20:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:22.612 00:32:22.612 Discovery Log Number of Records 2, Generation counter 2 00:32:22.612 =====Discovery Log Entry 0====== 00:32:22.612 trtype: tcp 00:32:22.612 adrfam: ipv4 00:32:22.612 subtype: current discovery subsystem 00:32:22.612 treq: not specified, sq flow control disable supported 00:32:22.612 portid: 1 00:32:22.612 trsvcid: 4420 00:32:22.612 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:22.612 traddr: 10.0.0.1 00:32:22.612 eflags: none 00:32:22.612 sectype: none 00:32:22.612 =====Discovery Log Entry 1====== 00:32:22.612 trtype: tcp 00:32:22.612 adrfam: ipv4 00:32:22.612 subtype: nvme subsystem 00:32:22.612 treq: not specified, sq flow control disable supported 00:32:22.612 portid: 1 00:32:22.612 trsvcid: 4420 00:32:22.612 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:22.612 traddr: 10.0.0.1 00:32:22.612 eflags: none 00:32:22.612 sectype: none 00:32:22.871 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:22.871 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:22.871 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.871 ===================================================== 00:32:22.871 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:22.871 ===================================================== 00:32:22.871 Controller Capabilities/Features 00:32:22.871 ================================ 00:32:22.871 Vendor ID: 0000 00:32:22.871 Subsystem Vendor ID: 0000 00:32:22.871 Serial Number: 9450e1dc0c22d17f0491 00:32:22.871 Model Number: Linux 00:32:22.871 Firmware Version: 6.7.0-68 00:32:22.871 Recommended Arb Burst: 0 00:32:22.871 IEEE OUI Identifier: 00 00 00 00:32:22.871 Multi-path I/O 00:32:22.871 May have multiple subsystem ports: No 00:32:22.871 May have multiple 
controllers: No 00:32:22.871 Associated with SR-IOV VF: No 00:32:22.871 Max Data Transfer Size: Unlimited 00:32:22.871 Max Number of Namespaces: 0 00:32:22.871 Max Number of I/O Queues: 1024 00:32:22.871 NVMe Specification Version (VS): 1.3 00:32:22.871 NVMe Specification Version (Identify): 1.3 00:32:22.871 Maximum Queue Entries: 1024 00:32:22.871 Contiguous Queues Required: No 00:32:22.871 Arbitration Mechanisms Supported 00:32:22.871 Weighted Round Robin: Not Supported 00:32:22.871 Vendor Specific: Not Supported 00:32:22.871 Reset Timeout: 7500 ms 00:32:22.871 Doorbell Stride: 4 bytes 00:32:22.871 NVM Subsystem Reset: Not Supported 00:32:22.871 Command Sets Supported 00:32:22.871 NVM Command Set: Supported 00:32:22.871 Boot Partition: Not Supported 00:32:22.871 Memory Page Size Minimum: 4096 bytes 00:32:22.871 Memory Page Size Maximum: 4096 bytes 00:32:22.871 Persistent Memory Region: Not Supported 00:32:22.871 Optional Asynchronous Events Supported 00:32:22.871 Namespace Attribute Notices: Not Supported 00:32:22.871 Firmware Activation Notices: Not Supported 00:32:22.871 ANA Change Notices: Not Supported 00:32:22.871 PLE Aggregate Log Change Notices: Not Supported 00:32:22.871 LBA Status Info Alert Notices: Not Supported 00:32:22.871 EGE Aggregate Log Change Notices: Not Supported 00:32:22.871 Normal NVM Subsystem Shutdown event: Not Supported 00:32:22.871 Zone Descriptor Change Notices: Not Supported 00:32:22.871 Discovery Log Change Notices: Supported 00:32:22.871 Controller Attributes 00:32:22.871 128-bit Host Identifier: Not Supported 00:32:22.871 Non-Operational Permissive Mode: Not Supported 00:32:22.871 NVM Sets: Not Supported 00:32:22.871 Read Recovery Levels: Not Supported 00:32:22.871 Endurance Groups: Not Supported 00:32:22.871 Predictable Latency Mode: Not Supported 00:32:22.871 Traffic Based Keep ALive: Not Supported 00:32:22.871 Namespace Granularity: Not Supported 00:32:22.871 SQ Associations: Not Supported 00:32:22.871 UUID List: Not Supported 00:32:22.871 Multi-Domain Subsystem: Not Supported 00:32:22.871 Fixed Capacity Management: Not Supported 00:32:22.871 Variable Capacity Management: Not Supported 00:32:22.871 Delete Endurance Group: Not Supported 00:32:22.871 Delete NVM Set: Not Supported 00:32:22.871 Extended LBA Formats Supported: Not Supported 00:32:22.871 Flexible Data Placement Supported: Not Supported 00:32:22.871 00:32:22.871 Controller Memory Buffer Support 00:32:22.871 ================================ 00:32:22.871 Supported: No 00:32:22.871 00:32:22.871 Persistent Memory Region Support 00:32:22.871 ================================ 00:32:22.871 Supported: No 00:32:22.871 00:32:22.871 Admin Command Set Attributes 00:32:22.871 ============================ 00:32:22.871 Security Send/Receive: Not Supported 00:32:22.871 Format NVM: Not Supported 00:32:22.871 Firmware Activate/Download: Not Supported 00:32:22.871 Namespace Management: Not Supported 00:32:22.871 Device Self-Test: Not Supported 00:32:22.871 Directives: Not Supported 00:32:22.871 NVMe-MI: Not Supported 00:32:22.871 Virtualization Management: Not Supported 00:32:22.871 Doorbell Buffer Config: Not Supported 00:32:22.871 Get LBA Status Capability: Not Supported 00:32:22.871 Command & Feature Lockdown Capability: Not Supported 00:32:22.871 Abort Command Limit: 1 00:32:22.871 Async Event Request Limit: 1 00:32:22.871 Number of Firmware Slots: N/A 00:32:22.871 Firmware Slot 1 Read-Only: N/A 00:32:22.871 Firmware Activation Without Reset: N/A 00:32:22.871 Multiple Update Detection Support: N/A 
00:32:22.871 Firmware Update Granularity: No Information Provided 00:32:22.871 Per-Namespace SMART Log: No 00:32:22.871 Asymmetric Namespace Access Log Page: Not Supported 00:32:22.871 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:22.871 Command Effects Log Page: Not Supported 00:32:22.871 Get Log Page Extended Data: Supported 00:32:22.871 Telemetry Log Pages: Not Supported 00:32:22.871 Persistent Event Log Pages: Not Supported 00:32:22.871 Supported Log Pages Log Page: May Support 00:32:22.871 Commands Supported & Effects Log Page: Not Supported 00:32:22.871 Feature Identifiers & Effects Log Page:May Support 00:32:22.871 NVMe-MI Commands & Effects Log Page: May Support 00:32:22.871 Data Area 4 for Telemetry Log: Not Supported 00:32:22.871 Error Log Page Entries Supported: 1 00:32:22.871 Keep Alive: Not Supported 00:32:22.871 00:32:22.872 NVM Command Set Attributes 00:32:22.872 ========================== 00:32:22.872 Submission Queue Entry Size 00:32:22.872 Max: 1 00:32:22.872 Min: 1 00:32:22.872 Completion Queue Entry Size 00:32:22.872 Max: 1 00:32:22.872 Min: 1 00:32:22.872 Number of Namespaces: 0 00:32:22.872 Compare Command: Not Supported 00:32:22.872 Write Uncorrectable Command: Not Supported 00:32:22.872 Dataset Management Command: Not Supported 00:32:22.872 Write Zeroes Command: Not Supported 00:32:22.872 Set Features Save Field: Not Supported 00:32:22.872 Reservations: Not Supported 00:32:22.872 Timestamp: Not Supported 00:32:22.872 Copy: Not Supported 00:32:22.872 Volatile Write Cache: Not Present 00:32:22.872 Atomic Write Unit (Normal): 1 00:32:22.872 Atomic Write Unit (PFail): 1 00:32:22.872 Atomic Compare & Write Unit: 1 00:32:22.872 Fused Compare & Write: Not Supported 00:32:22.872 Scatter-Gather List 00:32:22.872 SGL Command Set: Supported 00:32:22.872 SGL Keyed: Not Supported 00:32:22.872 SGL Bit Bucket Descriptor: Not Supported 00:32:22.872 SGL Metadata Pointer: Not Supported 00:32:22.872 Oversized SGL: Not Supported 00:32:22.872 SGL Metadata Address: Not Supported 00:32:22.872 SGL Offset: Supported 00:32:22.872 Transport SGL Data Block: Not Supported 00:32:22.872 Replay Protected Memory Block: Not Supported 00:32:22.872 00:32:22.872 Firmware Slot Information 00:32:22.872 ========================= 00:32:22.872 Active slot: 0 00:32:22.872 00:32:22.872 00:32:22.872 Error Log 00:32:22.872 ========= 00:32:22.872 00:32:22.872 Active Namespaces 00:32:22.872 ================= 00:32:22.872 Discovery Log Page 00:32:22.872 ================== 00:32:22.872 Generation Counter: 2 00:32:22.872 Number of Records: 2 00:32:22.872 Record Format: 0 00:32:22.872 00:32:22.872 Discovery Log Entry 0 00:32:22.872 ---------------------- 00:32:22.872 Transport Type: 3 (TCP) 00:32:22.872 Address Family: 1 (IPv4) 00:32:22.872 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:22.872 Entry Flags: 00:32:22.872 Duplicate Returned Information: 0 00:32:22.872 Explicit Persistent Connection Support for Discovery: 0 00:32:22.872 Transport Requirements: 00:32:22.872 Secure Channel: Not Specified 00:32:22.872 Port ID: 1 (0x0001) 00:32:22.872 Controller ID: 65535 (0xffff) 00:32:22.872 Admin Max SQ Size: 32 00:32:22.872 Transport Service Identifier: 4420 00:32:22.872 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:22.872 Transport Address: 10.0.0.1 00:32:22.872 Discovery Log Entry 1 00:32:22.872 ---------------------- 00:32:22.872 Transport Type: 3 (TCP) 00:32:22.872 Address Family: 1 (IPv4) 00:32:22.872 Subsystem Type: 2 (NVM Subsystem) 00:32:22.872 Entry Flags: 
00:32:22.872 Duplicate Returned Information: 0 00:32:22.872 Explicit Persistent Connection Support for Discovery: 0 00:32:22.872 Transport Requirements: 00:32:22.872 Secure Channel: Not Specified 00:32:22.872 Port ID: 1 (0x0001) 00:32:22.872 Controller ID: 65535 (0xffff) 00:32:22.872 Admin Max SQ Size: 32 00:32:22.872 Transport Service Identifier: 4420 00:32:22.872 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:22.872 Transport Address: 10.0.0.1 00:32:22.872 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:22.872 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.872 get_feature(0x01) failed 00:32:22.872 get_feature(0x02) failed 00:32:22.872 get_feature(0x04) failed 00:32:22.872 ===================================================== 00:32:22.872 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:22.872 ===================================================== 00:32:22.872 Controller Capabilities/Features 00:32:22.872 ================================ 00:32:22.872 Vendor ID: 0000 00:32:22.872 Subsystem Vendor ID: 0000 00:32:22.872 Serial Number: 9c41674190da58e8bf6e 00:32:22.872 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:22.872 Firmware Version: 6.7.0-68 00:32:22.872 Recommended Arb Burst: 6 00:32:22.872 IEEE OUI Identifier: 00 00 00 00:32:22.872 Multi-path I/O 00:32:22.872 May have multiple subsystem ports: Yes 00:32:22.872 May have multiple controllers: Yes 00:32:22.872 Associated with SR-IOV VF: No 00:32:22.872 Max Data Transfer Size: Unlimited 00:32:22.872 Max Number of Namespaces: 1024 00:32:22.872 Max Number of I/O Queues: 128 00:32:22.872 NVMe Specification Version (VS): 1.3 00:32:22.872 NVMe Specification Version (Identify): 1.3 00:32:22.872 Maximum Queue Entries: 1024 00:32:22.872 Contiguous Queues Required: No 00:32:22.872 Arbitration Mechanisms Supported 00:32:22.872 Weighted Round Robin: Not Supported 00:32:22.872 Vendor Specific: Not Supported 00:32:22.872 Reset Timeout: 7500 ms 00:32:22.872 Doorbell Stride: 4 bytes 00:32:22.872 NVM Subsystem Reset: Not Supported 00:32:22.872 Command Sets Supported 00:32:22.872 NVM Command Set: Supported 00:32:22.872 Boot Partition: Not Supported 00:32:22.872 Memory Page Size Minimum: 4096 bytes 00:32:22.872 Memory Page Size Maximum: 4096 bytes 00:32:22.872 Persistent Memory Region: Not Supported 00:32:22.872 Optional Asynchronous Events Supported 00:32:22.872 Namespace Attribute Notices: Supported 00:32:22.872 Firmware Activation Notices: Not Supported 00:32:22.872 ANA Change Notices: Supported 00:32:22.872 PLE Aggregate Log Change Notices: Not Supported 00:32:22.872 LBA Status Info Alert Notices: Not Supported 00:32:22.872 EGE Aggregate Log Change Notices: Not Supported 00:32:22.872 Normal NVM Subsystem Shutdown event: Not Supported 00:32:22.872 Zone Descriptor Change Notices: Not Supported 00:32:22.872 Discovery Log Change Notices: Not Supported 00:32:22.872 Controller Attributes 00:32:22.872 128-bit Host Identifier: Supported 00:32:22.872 Non-Operational Permissive Mode: Not Supported 00:32:22.872 NVM Sets: Not Supported 00:32:22.872 Read Recovery Levels: Not Supported 00:32:22.872 Endurance Groups: Not Supported 00:32:22.872 Predictable Latency Mode: Not Supported 00:32:22.872 Traffic Based Keep ALive: Supported 00:32:22.872 Namespace Granularity: Not Supported 
00:32:22.872 SQ Associations: Not Supported 00:32:22.872 UUID List: Not Supported 00:32:22.872 Multi-Domain Subsystem: Not Supported 00:32:22.872 Fixed Capacity Management: Not Supported 00:32:22.872 Variable Capacity Management: Not Supported 00:32:22.872 Delete Endurance Group: Not Supported 00:32:22.872 Delete NVM Set: Not Supported 00:32:22.872 Extended LBA Formats Supported: Not Supported 00:32:22.872 Flexible Data Placement Supported: Not Supported 00:32:22.872 00:32:22.872 Controller Memory Buffer Support 00:32:22.872 ================================ 00:32:22.872 Supported: No 00:32:22.872 00:32:22.872 Persistent Memory Region Support 00:32:22.872 ================================ 00:32:22.872 Supported: No 00:32:22.872 00:32:22.872 Admin Command Set Attributes 00:32:22.872 ============================ 00:32:22.872 Security Send/Receive: Not Supported 00:32:22.872 Format NVM: Not Supported 00:32:22.872 Firmware Activate/Download: Not Supported 00:32:22.872 Namespace Management: Not Supported 00:32:22.872 Device Self-Test: Not Supported 00:32:22.872 Directives: Not Supported 00:32:22.872 NVMe-MI: Not Supported 00:32:22.872 Virtualization Management: Not Supported 00:32:22.872 Doorbell Buffer Config: Not Supported 00:32:22.872 Get LBA Status Capability: Not Supported 00:32:22.872 Command & Feature Lockdown Capability: Not Supported 00:32:22.872 Abort Command Limit: 4 00:32:22.872 Async Event Request Limit: 4 00:32:22.872 Number of Firmware Slots: N/A 00:32:22.872 Firmware Slot 1 Read-Only: N/A 00:32:22.872 Firmware Activation Without Reset: N/A 00:32:22.872 Multiple Update Detection Support: N/A 00:32:22.872 Firmware Update Granularity: No Information Provided 00:32:22.872 Per-Namespace SMART Log: Yes 00:32:22.872 Asymmetric Namespace Access Log Page: Supported 00:32:22.872 ANA Transition Time : 10 sec 00:32:22.872 00:32:22.872 Asymmetric Namespace Access Capabilities 00:32:22.872 ANA Optimized State : Supported 00:32:22.872 ANA Non-Optimized State : Supported 00:32:22.872 ANA Inaccessible State : Supported 00:32:22.872 ANA Persistent Loss State : Supported 00:32:22.872 ANA Change State : Supported 00:32:22.872 ANAGRPID is not changed : No 00:32:22.873 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:22.873 00:32:22.873 ANA Group Identifier Maximum : 128 00:32:22.873 Number of ANA Group Identifiers : 128 00:32:22.873 Max Number of Allowed Namespaces : 1024 00:32:22.873 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:22.873 Command Effects Log Page: Supported 00:32:22.873 Get Log Page Extended Data: Supported 00:32:22.873 Telemetry Log Pages: Not Supported 00:32:22.873 Persistent Event Log Pages: Not Supported 00:32:22.873 Supported Log Pages Log Page: May Support 00:32:22.873 Commands Supported & Effects Log Page: Not Supported 00:32:22.873 Feature Identifiers & Effects Log Page:May Support 00:32:22.873 NVMe-MI Commands & Effects Log Page: May Support 00:32:22.873 Data Area 4 for Telemetry Log: Not Supported 00:32:22.873 Error Log Page Entries Supported: 128 00:32:22.873 Keep Alive: Supported 00:32:22.873 Keep Alive Granularity: 1000 ms 00:32:22.873 00:32:22.873 NVM Command Set Attributes 00:32:22.873 ========================== 00:32:22.873 Submission Queue Entry Size 00:32:22.873 Max: 64 00:32:22.873 Min: 64 00:32:22.873 Completion Queue Entry Size 00:32:22.873 Max: 16 00:32:22.873 Min: 16 00:32:22.873 Number of Namespaces: 1024 00:32:22.873 Compare Command: Not Supported 00:32:22.873 Write Uncorrectable Command: Not Supported 00:32:22.873 Dataset Management Command: Supported 
00:32:22.873 Write Zeroes Command: Supported 00:32:22.873 Set Features Save Field: Not Supported 00:32:22.873 Reservations: Not Supported 00:32:22.873 Timestamp: Not Supported 00:32:22.873 Copy: Not Supported 00:32:22.873 Volatile Write Cache: Present 00:32:22.873 Atomic Write Unit (Normal): 1 00:32:22.873 Atomic Write Unit (PFail): 1 00:32:22.873 Atomic Compare & Write Unit: 1 00:32:22.873 Fused Compare & Write: Not Supported 00:32:22.873 Scatter-Gather List 00:32:22.873 SGL Command Set: Supported 00:32:22.873 SGL Keyed: Not Supported 00:32:22.873 SGL Bit Bucket Descriptor: Not Supported 00:32:22.873 SGL Metadata Pointer: Not Supported 00:32:22.873 Oversized SGL: Not Supported 00:32:22.873 SGL Metadata Address: Not Supported 00:32:22.873 SGL Offset: Supported 00:32:22.873 Transport SGL Data Block: Not Supported 00:32:22.873 Replay Protected Memory Block: Not Supported 00:32:22.873 00:32:22.873 Firmware Slot Information 00:32:22.873 ========================= 00:32:22.873 Active slot: 0 00:32:22.873 00:32:22.873 Asymmetric Namespace Access 00:32:22.873 =========================== 00:32:22.873 Change Count : 0 00:32:22.873 Number of ANA Group Descriptors : 1 00:32:22.873 ANA Group Descriptor : 0 00:32:22.873 ANA Group ID : 1 00:32:22.873 Number of NSID Values : 1 00:32:22.873 Change Count : 0 00:32:22.873 ANA State : 1 00:32:22.873 Namespace Identifier : 1 00:32:22.873 00:32:22.873 Commands Supported and Effects 00:32:22.873 ============================== 00:32:22.873 Admin Commands 00:32:22.873 -------------- 00:32:22.873 Get Log Page (02h): Supported 00:32:22.873 Identify (06h): Supported 00:32:22.873 Abort (08h): Supported 00:32:22.873 Set Features (09h): Supported 00:32:22.873 Get Features (0Ah): Supported 00:32:22.873 Asynchronous Event Request (0Ch): Supported 00:32:22.873 Keep Alive (18h): Supported 00:32:22.873 I/O Commands 00:32:22.873 ------------ 00:32:22.873 Flush (00h): Supported 00:32:22.873 Write (01h): Supported LBA-Change 00:32:22.873 Read (02h): Supported 00:32:22.873 Write Zeroes (08h): Supported LBA-Change 00:32:22.873 Dataset Management (09h): Supported 00:32:22.873 00:32:22.873 Error Log 00:32:22.873 ========= 00:32:22.873 Entry: 0 00:32:22.873 Error Count: 0x3 00:32:22.873 Submission Queue Id: 0x0 00:32:22.873 Command Id: 0x5 00:32:22.873 Phase Bit: 0 00:32:22.873 Status Code: 0x2 00:32:22.873 Status Code Type: 0x0 00:32:22.873 Do Not Retry: 1 00:32:22.873 Error Location: 0x28 00:32:22.873 LBA: 0x0 00:32:22.873 Namespace: 0x0 00:32:22.873 Vendor Log Page: 0x0 00:32:22.873 ----------- 00:32:22.873 Entry: 1 00:32:22.873 Error Count: 0x2 00:32:22.873 Submission Queue Id: 0x0 00:32:22.873 Command Id: 0x5 00:32:22.873 Phase Bit: 0 00:32:22.873 Status Code: 0x2 00:32:22.873 Status Code Type: 0x0 00:32:22.873 Do Not Retry: 1 00:32:22.873 Error Location: 0x28 00:32:22.873 LBA: 0x0 00:32:22.873 Namespace: 0x0 00:32:22.873 Vendor Log Page: 0x0 00:32:22.873 ----------- 00:32:22.873 Entry: 2 00:32:22.873 Error Count: 0x1 00:32:22.873 Submission Queue Id: 0x0 00:32:22.873 Command Id: 0x4 00:32:22.873 Phase Bit: 0 00:32:22.873 Status Code: 0x2 00:32:22.873 Status Code Type: 0x0 00:32:22.873 Do Not Retry: 1 00:32:22.873 Error Location: 0x28 00:32:22.873 LBA: 0x0 00:32:22.873 Namespace: 0x0 00:32:22.873 Vendor Log Page: 0x0 00:32:22.873 00:32:22.873 Number of Queues 00:32:22.873 ================ 00:32:22.873 Number of I/O Submission Queues: 128 00:32:22.873 Number of I/O Completion Queues: 128 00:32:22.873 00:32:22.873 ZNS Specific Controller Data 00:32:22.873 
============================ 00:32:22.873 Zone Append Size Limit: 0 00:32:22.873 00:32:22.873 00:32:22.873 Active Namespaces 00:32:22.873 ================= 00:32:22.873 get_feature(0x05) failed 00:32:22.873 Namespace ID:1 00:32:22.873 Command Set Identifier: NVM (00h) 00:32:22.873 Deallocate: Supported 00:32:22.873 Deallocated/Unwritten Error: Not Supported 00:32:22.873 Deallocated Read Value: Unknown 00:32:22.873 Deallocate in Write Zeroes: Not Supported 00:32:22.873 Deallocated Guard Field: 0xFFFF 00:32:22.873 Flush: Supported 00:32:22.873 Reservation: Not Supported 00:32:22.873 Namespace Sharing Capabilities: Multiple Controllers 00:32:22.873 Size (in LBAs): 1953525168 (931GiB) 00:32:22.873 Capacity (in LBAs): 1953525168 (931GiB) 00:32:22.873 Utilization (in LBAs): 1953525168 (931GiB) 00:32:22.873 UUID: ea80638d-e730-4321-a41d-b45f04e80a5e 00:32:22.873 Thin Provisioning: Not Supported 00:32:22.873 Per-NS Atomic Units: Yes 00:32:22.873 Atomic Boundary Size (Normal): 0 00:32:22.873 Atomic Boundary Size (PFail): 0 00:32:22.873 Atomic Boundary Offset: 0 00:32:22.873 NGUID/EUI64 Never Reused: No 00:32:22.873 ANA group ID: 1 00:32:22.873 Namespace Write Protected: No 00:32:22.873 Number of LBA Formats: 1 00:32:22.873 Current LBA Format: LBA Format #00 00:32:22.873 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:22.873 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:22.873 rmmod nvme_tcp 00:32:22.873 rmmod nvme_fabrics 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:22.873 20:02:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:25.401 
20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:25.401 20:02:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:26.340 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:26.340 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:26.340 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:27.274 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:27.274 00:32:27.274 real 0m9.217s 00:32:27.274 user 0m1.898s 00:32:27.274 sys 0m3.340s 00:32:27.274 20:02:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:27.274 20:02:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.274 ************************************ 00:32:27.274 END TEST nvmf_identify_kernel_target 00:32:27.274 ************************************ 00:32:27.274 20:02:36 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:27.274 20:02:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:27.274 20:02:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:27.274 20:02:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.274 ************************************ 00:32:27.274 START TEST nvmf_auth_host 00:32:27.274 ************************************ 00:32:27.274 20:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
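The traces above (the nvmf/common.sh target helpers, then clean_kernel_target) drive the in-kernel nvmet target purely through configfs: a subsystem with one namespace backed by /dev/nvme0n1 is created, a TCP port on 10.0.0.1:4420 is linked to it, the setup is probed with nvme discover and spdk_nvme_identify, and everything is unlinked, removed, and the nvmet_tcp/nvmet modules unloaded again. A condensed sketch of those steps follows; the configfs attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are not visible in the xtrace output, which only records the echo commands, so they are filled in here from the standard nvmet configfs layout and should be read as an approximation of what the script does, not a copy of it.

subnqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

modprobe nvmet_tcp                                         # kernel target + TCP transport (loaded before this point in the log)

mkdir $cfg/subsystems/$subnqn                              # subsystem seen as "testnqn" in the discovery log
echo SPDK-$subnqn > $cfg/subsystems/$subnqn/attr_model     # model string reported by the identify output
echo 1            > $cfg/subsystems/$subnqn/attr_allow_any_host
mkdir $cfg/subsystems/$subnqn/namespaces/1
echo /dev/nvme0n1 > $cfg/subsystems/$subnqn/namespaces/1/device_path
echo 1            > $cfg/subsystems/$subnqn/namespaces/1/enable

mkdir $cfg/ports/1                                         # TCP listener on 10.0.0.1:4420
echo 10.0.0.1 > $cfg/ports/1/addr_traddr
echo tcp      > $cfg/ports/1/addr_trtype
echo 4420     > $cfg/ports/1/addr_trsvcid
echo ipv4     > $cfg/ports/1/addr_adrfam
ln -s $cfg/subsystems/$subnqn $cfg/ports/1/subsystems/     # expose the subsystem on the port

nvme discover -t tcp -a 10.0.0.1 -s 4420                   # yields the two discovery records shown above

# Teardown, as in clean_kernel_target:
echo 0 > $cfg/subsystems/$subnqn/namespaces/1/enable
rm -f  $cfg/ports/1/subsystems/$subnqn
rmdir  $cfg/subsystems/$subnqn/namespaces/1
rmdir  $cfg/ports/1
rmdir  $cfg/subsystems/$subnqn
modprobe -r nvmet_tcp nvmet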
00:32:27.533 * Looking for test storage... 00:32:27.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.533 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.534 20:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.440 
20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.440 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.440 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.441 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.441 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:32:29.441 00:32:29.441 --- 10.0.0.2 ping statistics --- 00:32:29.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.441 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:29.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:32:29.441 00:32:29.441 --- 10.0.0.1 ping statistics --- 00:32:29.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.441 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=4114437 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 4114437 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 4114437 ']' 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
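Before the auth tests start, nvmf_tcp_init splits the two E810 ports it found (cvl_0_0, cvl_0_1) across a private network namespace so initiator and target traffic actually crosses the link: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is confirmed with one ping in each direction. Collapsed out of the trace above, the sequence amounts to the following, with $SPDK_DIR standing in for the full Jenkins workspace path on the nvmf_tgt line that nvmfappstart runs next:

NS=cvl_0_0_ns_spdk

ip netns add $NS
ip link set cvl_0_0 netns $NS                            # target-side NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (root namespace)
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

ping -c 1 10.0.0.2                                       # root namespace -> target namespace
ip netns exec $NS ping -c 1 10.0.0.1                     # target namespace -> root namespace

# nvmfappstart then launches the SPDK target inside the namespace:
ip netns exec $NS $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &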
00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:29.441 20:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f0b192c294079585281cd4fc6fa579cb 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DaC 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f0b192c294079585281cd4fc6fa579cb 0 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f0b192c294079585281cd4fc6fa579cb 0 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f0b192c294079585281cd4fc6fa579cb 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:29.698 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DaC 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DaC 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DaC 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:29.961 
20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fbea4a57c537438009ba63d7595e3443946e35afe9eb12a50b8f3fb42060780f 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.X9L 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fbea4a57c537438009ba63d7595e3443946e35afe9eb12a50b8f3fb42060780f 3 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fbea4a57c537438009ba63d7595e3443946e35afe9eb12a50b8f3fb42060780f 3 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fbea4a57c537438009ba63d7595e3443946e35afe9eb12a50b8f3fb42060780f 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.X9L 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.X9L 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.X9L 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:29.961 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3ef4969c30e58f2dd0025cdeb741c1970ae60bd4b9d6c76 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EyS 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3ef4969c30e58f2dd0025cdeb741c1970ae60bd4b9d6c76 0 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3ef4969c30e58f2dd0025cdeb741c1970ae60bd4b9d6c76 0 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3ef4969c30e58f2dd0025cdeb741c1970ae60bd4b9d6c76 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EyS 00:32:29.962 20:02:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EyS 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EyS 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e68b7b077bfd13475272699aac4598b7c399e75740c478b0 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tZv 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e68b7b077bfd13475272699aac4598b7c399e75740c478b0 2 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e68b7b077bfd13475272699aac4598b7c399e75740c478b0 2 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e68b7b077bfd13475272699aac4598b7c399e75740c478b0 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tZv 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tZv 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tZv 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=55c53ddce31737e231ef04dc94fac9b9 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UX5 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 55c53ddce31737e231ef04dc94fac9b9 1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 55c53ddce31737e231ef04dc94fac9b9 1 
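The block of traces here is host/auth.sh building its key material: each gen_dhchap_key DIGEST LEN call draws LEN/2 random bytes with xxd, hands the hex string to format_dhchap_key together with a digest index (null=0, sha256=1, sha384=2, sha512=3, the values visible as digest=N in the trace), and returns a mode-0600 temp file whose path lands in keys[]/ckeys[]. The paraphrase below condenses those steps; the actual DHHC-1 encoding is performed by the inline python helper the script invokes, and the output redirection into the temp file is inferred rather than visible in the xtrace:

gen_dhchap_key() {                        # paraphrase of the traced helper, not the original
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)            # LEN hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"  # emits DHHC-1:<idx>:<encoded>:
    chmod 0600 "$file"                                        # the key file is a secret
    echo "$file"
}

# Usage as seen in the trace:
keys[0]=$(gen_dhchap_key null 32);  ckeys[0]=$(gen_dhchap_key sha512 64)
keys[1]=$(gen_dhchap_key null 48);  ckeys[1]=$(gen_dhchap_key sha384 48)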
00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=55c53ddce31737e231ef04dc94fac9b9 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UX5 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UX5 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UX5 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2a708c73c5cb1776e514cadc614d574e 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QWn 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2a708c73c5cb1776e514cadc614d574e 1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2a708c73c5cb1776e514cadc614d574e 1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2a708c73c5cb1776e514cadc614d574e 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:29.962 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QWn 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QWn 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.QWn 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=cab576288d424c7994c7b4b79a413eed8980e25a28bd81dc 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oIr 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cab576288d424c7994c7b4b79a413eed8980e25a28bd81dc 2 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cab576288d424c7994c7b4b79a413eed8980e25a28bd81dc 2 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.264 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cab576288d424c7994c7b4b79a413eed8980e25a28bd81dc 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oIr 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oIr 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oIr 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=04c8f6c94fc0b0a4f1d755b099f55c92 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4nD 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 04c8f6c94fc0b0a4f1d755b099f55c92 0 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 04c8f6c94fc0b0a4f1d755b099f55c92 0 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=04c8f6c94fc0b0a4f1d755b099f55c92 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4nD 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4nD 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4nD 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=939f92857f794a6db4f964b2ba46bcbb7387f6bfd17a34c8435d94480674ad8f 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Bkz 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 939f92857f794a6db4f964b2ba46bcbb7387f6bfd17a34c8435d94480674ad8f 3 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 939f92857f794a6db4f964b2ba46bcbb7387f6bfd17a34c8435d94480674ad8f 3 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=939f92857f794a6db4f964b2ba46bcbb7387f6bfd17a34c8435d94480674ad8f 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Bkz 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Bkz 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Bkz 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4114437 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 4114437 ']' 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
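The trace above shows how each DH-HMAC-CHAP secret is produced: gen_dhchap_key draws len/2 random bytes with xxd, maps the digest name to a numeric id (null=0, sha256=1, sha384=2, sha512=3), pipes the hex string through an inline python step, and stores the result in a mode-0600 temp file. The body of the "python -" step is not captured in the log; the sketch below assumes it appends a CRC32 of the key and base64-encodes the result to build the DHHC-1:<id>:<blob>: secret, which matches the shape of the keys used later in this run. Only the xxd/mktemp/chmod parts are taken verbatim from the trace.

# Minimal sketch of the traced key-generation step (assumptions noted above).
gen_dhchap_key_sketch() {
    local digest=$1 len=$2
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    local key file
    # len hex characters of entropy, exactly as traced: xxd -p -c0 -l <len/2> /dev/urandom
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # Assumed formatting (the "python -" body is not in the log):
    #   secret = "DHHC-1:<digest id>:" + base64(key || crc32(key)) + ":"
    python3 -c '
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), sys.argv[2]
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest}:{blob}:")
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# e.g. gen_dhchap_key_sketch sha256 32  ->  /tmp/spdk.key-sha256.<suffix>, as with .UX5 above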
00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:30.265 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DaC 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.X9L ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.X9L 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EyS 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tZv ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tZv 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UX5 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.QWn ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QWn 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oIr 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4nD ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4nD 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.524 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.525 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Bkz 00:32:30.525 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.525 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
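Once the temp files exist, the loop above registers each secret and its controller counterpart with the running target over the RPC socket as keyring entries key0..key4 and ckey0..ckey3 (ckey4 is intentionally empty). rpc_cmd in these tests is, as far as the trace shows, a thin wrapper that forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock; the manual equivalent would look roughly like the sketch below, with the socket path, key names, and file paths taken from this run and only the rpc.py wrapper itself assumed.

# Rough manual equivalent of the keyring_file_add_key calls traced above.
# rpc_py stands in for the test's rpc_cmd helper (assumption); names/paths are from this run.
rpc_py() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc_py keyring_file_add_key key0  /tmp/spdk.key-null.DaC
rpc_py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.X9L
rpc_py keyring_file_add_key key1  /tmp/spdk.key-null.EyS
rpc_py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tZv
rpc_py keyring_file_add_key key2  /tmp/spdk.key-sha256.UX5
rpc_py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QWn
rpc_py keyring_file_add_key key3  /tmp/spdk.key-sha384.oIr
rpc_py keyring_file_add_key ckey3 /tmp/spdk.key-null.4nD
rpc_py keyring_file_add_key key4  /tmp/spdk.key-sha512.Bkz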
00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:30.783 20:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:32.158 Waiting for block devices as requested 00:32:32.158 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:32.158 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:32.158 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:32.158 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:32.417 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:32.417 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:32.417 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:32.677 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:32.677 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:32.677 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:32.677 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:32.935 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:32.935 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:32.935 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:32.935 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:33.193 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:33.193 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:33.760 No valid GPT data, bailing 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:33.760 20:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:33.760 00:32:33.760 Discovery Log Number of Records 2, Generation counter 2 00:32:33.760 =====Discovery Log Entry 0====== 00:32:33.760 trtype: tcp 00:32:33.760 adrfam: ipv4 00:32:33.760 subtype: current discovery subsystem 00:32:33.760 treq: not specified, sq flow control disable supported 00:32:33.760 portid: 1 00:32:33.760 trsvcid: 4420 00:32:33.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:33.760 traddr: 10.0.0.1 00:32:33.760 eflags: none 00:32:33.760 sectype: none 00:32:33.760 =====Discovery Log Entry 1====== 00:32:33.760 trtype: tcp 00:32:33.760 adrfam: ipv4 00:32:33.760 subtype: nvme subsystem 00:32:33.760 treq: not specified, sq flow control disable supported 00:32:33.760 portid: 1 00:32:33.760 trsvcid: 4420 00:32:33.760 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:33.760 traddr: 10.0.0.1 00:32:33.760 eflags: none 00:32:33.760 sectype: none 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 
]] 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.760 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.761 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 nvme0n1 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.019 20:02:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.019 
20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 nvme0n1 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.278 20:02:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.278 nvme0n1 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
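Each iteration of the digest/dhgroup/keyid loop pairs a target-side step with an initiator-side step: nvmet_auth_set_key pushes the secret (and, when one exists, the controller secret) plus the hash and DH group into the kernel nvmet host entry created earlier, and connect_authenticate then restricts the SPDK initiator to that digest and dhgroup, attaches with the matching --dhchap-key/--dhchap-ctrlr-key keyring entries, checks that bdev_nvme_get_controllers reports nvme0, and detaches. A condensed sketch of one sha256/ffdhe2048/keyid=1 round follows; the RPC flags and echoed values are copied from the trace, while the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an assumption about which files the echo calls write to.

# One round of the loop above, condensed. Configfs file names are assumptions;
# key files, echoed values, and rpc.py flags are taken from this run.
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

# Target side: install the host's DH-HMAC-CHAP parameters in the kernel host entry.
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"                         # assumed attribute name
echo ffdhe2048      > "$host_dir/dhchap_dhgroup"                      # assumed attribute name
echo "$(cat /tmp/spdk.key-null.EyS)"   > "$host_dir/dhchap_key"       # keys[1]
echo "$(cat /tmp/spdk.key-sha384.tZv)" > "$host_dir/dhchap_ctrl_key"  # ckeys[1]

# Initiator side: limit the allowed digest/dhgroup, attach with the keyring entries, verify, detach.
rpc_py() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc_py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_py bdev_nvme_get_controllers | jq -r '.[].name'                   # expect: nvme0
rpc_py bdev_nvme_detach_controller nvme0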
00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.278 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.279 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.537 nvme0n1 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:34.537 20:02:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.537 20:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.797 nvme0n1 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.797 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.798 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.798 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.798 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.798 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.056 nvme0n1 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.056 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.313 nvme0n1 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.313 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.314 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.571 nvme0n1 00:32:35.571 
20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.571 20:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.828 nvme0n1 00:32:35.828 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.828 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.829 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.088 nvme0n1 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.088 
20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.088 20:02:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.088 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.347 nvme0n1 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:36.347 20:02:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.347 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.607 nvme0n1 00:32:36.607 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.607 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.607 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.607 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.607 20:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.607 20:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.607 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.608 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.608 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.608 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.868 20:02:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.868 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.129 nvme0n1 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.129 20:02:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.129 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.388 nvme0n1 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.388 20:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.648 nvme0n1 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.648 20:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.648 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.908 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.909 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.169 nvme0n1 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:38.169 20:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.169 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.739 nvme0n1 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.739 
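By this point the trace has advanced to the next DH group: the outer loop at host/auth.sh@101 walks the configured dhgroups and the inner loop at @102 re-runs every key index against each of them, so the same set-key/attach/verify/detach pattern repeats for ffdhe4096 and now ffdhe6144. The shape of that iteration, reconstructed from the loop headers printed in the trace (the keys/ckeys arrays and the sha256 digest are taken from this excerpt; any enclosing digest loop is not shown here):

# Iteration order replayed by the trace (sketch; array contents come from the harness).
for dhgroup in "${dhgroups[@]}"; do                        # this excerpt covers ffdhe3072 through ffdhe8192
        for keyid in "${!keys[@]}"; do                     # 0 1 2 3 4
                nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # program the target side
                connect_authenticate sha256 "$dhgroup" "$keyid"    # attach, verify nvme0, detach
        done
done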
20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.739 20:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.739 20:02:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.739 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.308 nvme0n1 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.308 20:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.874 nvme0n1 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.874 
20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.874 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.440 nvme0n1 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.440 20:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.010 nvme0n1 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.010 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.269 20:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.270 20:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.270 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.270 20:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.208 nvme0n1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.208 20:02:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.208 20:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.145 nvme0n1 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.145 20:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.083 nvme0n1 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.083 
20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.083 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
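The key and ckey values echoed throughout this run (key3/ckey3 just above, key4 with an empty controller key in the next iteration) are DH-HMAC-CHAP secrets in the "DHHC-1:<t>:<base64>:" representation. As a hedged aside recalled from the secret format definition rather than from this log: <t> names the hash used to transform the secret (00 = untransformed, 01/02/03 = SHA-256/384/512), and the base64 blob carries the secret followed by a 4-byte CRC-32 trailer. A quick way to pull one apart, using key3 exactly as it appears in the trace:

# Split the DHHC-1 secret fields; key3 is copied verbatim from the trace above.
key='DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==:'
IFS=: read -r magic hmac blob _ <<< "$key"
echo "$magic $hmac"                 # DHHC-1 02  (02 = SHA-384-sized secret, per the format as recalled)
echo "$blob" | base64 -d | wc -c    # 52 bytes = 48-byte secret + 4-byte CRC-32 trailer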
00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.084 20:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.021 nvme0n1 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.021 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.022 
20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.022 20:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 nvme0n1 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 nvme0n1 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.436 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
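The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment that closes the trace above is what makes bidirectional authentication optional per key: bash's ${var:+word} expansion yields the flag pair only when a controller secret exists for that keyid, which is why combinations like keyid=4 (whose ckey was echoed as empty) attach without --dhchap-ctrlr-key at all. A self-contained illustration of just that expansion; the array contents here are placeholders, not the test's real secrets:

# How the optional --dhchap-ctrlr-key arguments are built (illustrative values only).
declare -a ckeys=([1]="DHHC-1:00:placeholder=:" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key flag>}"
done
# keyid=1 -> --dhchap-ctrlr-key ckey1
# keyid=4 -> <no controller key flag>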
00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.437 nvme0n1 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.437 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:46.695 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.696 20:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.696 nvme0n1 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.696 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.956 nvme0n1 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.956 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.957 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.217 nvme0n1 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.217 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
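The nvmf/common.sh lines traced here are get_main_ns_ip resolving which address the host should dial: an associative array maps the transport to the name of the environment variable holding the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and the value behind that name (10.0.0.1 in this run) is echoed. A sketch reconstructed from the trace follows; the $TEST_TRANSPORT variable name is an assumption, since the trace only shows its value "tcp".

# get_main_ns_ip, as reconstructed from the nvmf/common.sh@741-755 trace (sketch, not the verbatim helper).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is unset or has no mapped variable name.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion: the address itself
    echo "${!ip}"                          # 10.0.0.1 in this run
}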
00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.218 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 nvme0n1 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
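The connect_authenticate call traced from here on (sha384/ffdhe3072, keyid 1) follows the same host-side recipe as every earlier combination: restrict the allowed digest and DH group, attach a controller with the matching DH-HMAC-CHAP key names, confirm that nvme0 appears, and detach it for the next pass. A condensed sketch of one iteration, using only commands visible in the trace; rpc_cmd is the suite's RPC helper (presumably forwarding to scripts/rpc.py), and key1/ckey1 are key names registered earlier in the test, outside this excerpt:

# One connect_authenticate pass, reconstructed from the xtrace above (sketch).
digest=sha384 dhgroup=ffdhe3072 keyid=1
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
# Authentication succeeded iff the controller shows up under the expected name.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0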
00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.478 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.479 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.479 20:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.479 20:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.479 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.479 20:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.738 nvme0n1 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.738 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.997 nvme0n1 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.997 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.255 nvme0n1 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.255 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.256 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.514 nvme0n1 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.514 20:02:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.514 20:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.772 nvme0n1 00:32:48.772 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.772 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.772 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.772 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.772 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.772 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.031 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.032 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.292 nvme0n1 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.292 20:02:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.292 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.552 nvme0n1 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:49.552 20:02:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.552 20:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.121 nvme0n1 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.121 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:50.122 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.382 nvme0n1 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.382 20:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.949 nvme0n1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.949 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.514 nvme0n1 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.514 20:03:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.514 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.515 20:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.082 nvme0n1 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.082 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.083 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.650 nvme0n1 00:32:52.650 20:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
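The records above and below repeat a single pattern: host/auth.sh walks every digest, DH group and key slot, programs the key on the target, then authenticates a fresh host-side controller against it. A rough reconstruction of that driver loop, using only the loop heads and function names visible in the xtrace output (for digest / for dhgroup / for keyid at host/auth.sh@100-102, nvmet_auth_set_key at @103, connect_authenticate at @104); the array contents and the two function bodies live in the test suite itself and are only sketched here:

for digest in "${digests[@]}"; do              # sha384 in these records, sha512 further down
  for dhgroup in "${dhgroups[@]}"; do          # ffdhe6144, ffdhe8192, ffdhe2048, ...
    for keyid in "${!keys[@]}"; do             # key slots 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: digest, dhgroup, key, optional ctrl key
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify controller name, detach
    done
  done
done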
00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.650 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.215 nvme0n1 00:32:53.215 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.215 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.215 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.215 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.215 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.215 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
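On the host side, each connect_authenticate iteration reduces to four RPCs against the SPDK bdev/nvme layer, all visible as rpc_cmd calls in the surrounding records. Condensed for the iteration that begins here (sha384, ffdhe8192, key slot 0): 10.0.0.1:4420 and the host/subsystem NQNs are the values this run resolves, key0/ckey0 are key names prepared earlier in the test, and treating rpc_cmd as a thin wrapper over SPDK's rpc.py is an assumption rather than something shown in this trace:

# Limit the initiator to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# Attaching with the slot's key plus its controller (bidirectional) key triggers DH-HMAC-CHAP.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Authentication succeeded if the controller shows up under its expected name.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
# Detach so the next key slot starts from a clean state.
rpc_cmd bdev_nvme_detach_controller nvme0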
00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.473 20:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.411 nvme0n1 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.411 20:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.412 20:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.412 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.412 20:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.350 nvme0n1 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.350 20:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.286 nvme0n1 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.286 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.544 20:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.480 nvme0n1 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.480 20:03:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.480 20:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.414 nvme0n1 00:32:58.414 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.414 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 nvme0n1 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 20:03:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 20:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.674 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 nvme0n1 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 nvme0n1 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 20:03:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.191 20:03:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 nvme0n1 00:32:59.191 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.449 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.450 nvme0n1 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.450 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.708 20:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.708 nvme0n1 00:32:59.708 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.708 
20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.708 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.708 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.708 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.708 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.966 20:03:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.966 nvme0n1 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.966 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.224 nvme0n1 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.224 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.483 20:03:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.483 nvme0n1 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.483 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.741 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.741 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.741 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.741 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.741 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.741 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.742 
20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.742 20:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.742 nvme0n1 00:33:00.742 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.742 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.742 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.742 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.742 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.742 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.002 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.263 nvme0n1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.263 20:03:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.263 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.522 nvme0n1 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.522 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.523 20:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.781 nvme0n1 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.781 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.044 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.325 nvme0n1 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.325 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.326 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.588 nvme0n1 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.588 20:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.154 nvme0n1 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.154 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.155 20:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.720 nvme0n1 00:33:03.720 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.720 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.720 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.720 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.720 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.720 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.980 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.981 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.547 nvme0n1 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.547 20:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.112 nvme0n1 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.112 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.678 nvme0n1 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.678 20:03:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.678 20:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBiMTkyYzI5NDA3OTU4NTI4MWNkNGZjNmZhNTc5Y2L5ezEu: 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: ]] 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJlYTRhNTdjNTM3NDM4MDA5YmE2M2Q3NTk1ZTM0NDM5NDZlMzVhZmU5ZWIxMmE1MGI4ZjNmYjQyMDYwNzgwZlKG8TM=: 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.678 20:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.616 nvme0n1 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.616 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.876 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.815 nvme0n1 00:33:07.815 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.815 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.815 20:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.815 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.815 20:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.815 20:03:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVjNTNkZGNlMzE3MzdlMjMxZWYwNGRjOTRmYWM5YjnRxOw6: 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: ]] 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE3MDhjNzNjNWNiMTc3NmU1MTRjYWRjNjE0ZDU3NGWmzbEx: 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.815 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.816 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.754 nvme0n1 00:33:08.754 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.754 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.754 20:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.754 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.754 20:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2FiNTc2Mjg4ZDQyNGM3OTk0YzdiNGI3OWE0MTNlZWQ4OTgwZTI1YTI4YmQ4MWRjmnVRcA==: 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRjOGY2Yzk0ZmMwYjBhNGYxZDc1NWIwOTlmNTVjOTJdTmzF: 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:08.754 20:03:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.754 20:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.690 nvme0n1 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTM5ZjkyODU3Zjc5NGE2ZGI0Zjk2NGIyYmE0NmJjYmI3Mzg3ZjZiZmQxN2EzNGM4NDM1ZDk0NDgwNjc0YWQ4Zs2/UFs=: 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:09.690 20:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.068 nvme0n1 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNlZjQ5NjljMzBlNThmMmRkMDAyNWNkZWI3NDFjMTk3MGFlNjBiZDRiOWQ2Yzc2jq8ojA==: 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY4YjdiMDc3YmZkMTM0NzUyNzI2OTlhYWM0NTk4YjdjMzk5ZTc1NzQwYzQ3OGIwDntnQw==: 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.068 
20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.068 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.069 request: 00:33:11.069 { 00:33:11.069 "name": "nvme0", 00:33:11.069 "trtype": "tcp", 00:33:11.069 "traddr": "10.0.0.1", 00:33:11.069 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.069 "adrfam": "ipv4", 00:33:11.069 "trsvcid": "4420", 00:33:11.069 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.069 "method": "bdev_nvme_attach_controller", 00:33:11.069 "req_id": 1 00:33:11.069 } 00:33:11.069 Got JSON-RPC error response 00:33:11.069 response: 00:33:11.069 { 00:33:11.069 "code": -5, 00:33:11.069 "message": "Input/output error" 00:33:11.069 } 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:11.069 
20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.069 request: 00:33:11.069 { 00:33:11.069 "name": "nvme0", 00:33:11.069 "trtype": "tcp", 00:33:11.069 "traddr": "10.0.0.1", 00:33:11.069 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.069 "adrfam": "ipv4", 00:33:11.069 "trsvcid": "4420", 00:33:11.069 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.069 "dhchap_key": "key2", 00:33:11.069 "method": "bdev_nvme_attach_controller", 00:33:11.069 "req_id": 1 00:33:11.069 } 00:33:11.069 Got JSON-RPC error response 00:33:11.069 response: 00:33:11.069 { 00:33:11.069 "code": -5, 00:33:11.069 "message": "Input/output error" 00:33:11.069 } 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.069 
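(The failing attach attempts in this stretch of the trace all use the same negative-path assertion: the RPC is wrapped in the harness's NOT helper, the JSON-RPC layer answers with code -5 / "Input/output error", and the controller list must stay empty. A minimal sketch of that check, reusing the placeholders above and the NOT/rpc_cmd helpers shown in this log:

  # an attach with a missing or mismatched DH-HMAC-CHAP key must be rejected
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  # no controller may have been created as a side effect
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
)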
20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.069 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.327 request: 00:33:11.327 { 00:33:11.327 "name": "nvme0", 00:33:11.327 "trtype": "tcp", 00:33:11.327 "traddr": "10.0.0.1", 00:33:11.327 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.327 "adrfam": "ipv4", 00:33:11.327 "trsvcid": "4420", 00:33:11.327 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.327 "dhchap_key": "key1", 00:33:11.327 "dhchap_ctrlr_key": "ckey2", 00:33:11.327 "method": "bdev_nvme_attach_controller", 00:33:11.327 "req_id": 1 
00:33:11.327 } 00:33:11.327 Got JSON-RPC error response 00:33:11.327 response: 00:33:11.327 { 00:33:11.327 "code": -5, 00:33:11.327 "message": "Input/output error" 00:33:11.327 } 00:33:11.327 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.327 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:11.327 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:11.328 rmmod nvme_tcp 00:33:11.328 rmmod nvme_fabrics 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 4114437 ']' 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 4114437 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 4114437 ']' 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 4114437 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4114437 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4114437' 00:33:11.328 killing process with pid 4114437 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 4114437 00:33:11.328 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 4114437 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:11.597 20:03:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.597 20:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:13.495 20:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.867 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:14.867 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:14.867 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:15.802 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:15.802 20:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DaC /tmp/spdk.key-null.EyS /tmp/spdk.key-sha256.UX5 /tmp/spdk.key-sha384.oIr /tmp/spdk.key-sha512.Bkz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:15.802 20:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.177 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:17.177 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:17.177 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:17.178 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:17.178 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:17.178 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:17.178 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:17.178 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:17.178 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:17.178 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:17.178 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:17.178 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:17.178 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:17.178 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:17.178 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:17.178 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:17.178 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:17.178 00:33:17.178 real 0m49.791s 00:33:17.178 user 0m47.601s 00:33:17.178 sys 0m5.820s 00:33:17.178 20:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:17.178 20:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 ************************************ 00:33:17.178 END TEST nvmf_auth_host 00:33:17.178 ************************************ 00:33:17.178 20:03:26 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:17.178 20:03:26 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:17.178 20:03:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:17.178 20:03:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:17.178 20:03:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 ************************************ 00:33:17.178 START TEST nvmf_digest 00:33:17.178 ************************************ 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:17.178 * Looking for test storage... 
00:33:17.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:17.178 20:03:26 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:17.178 20:03:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.078 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:19.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:19.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:19.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:19.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.079 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.337 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.337 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:19.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:33:19.338 00:33:19.338 --- 10.0.0.2 ping statistics --- 00:33:19.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.338 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:19.338 00:33:19.338 --- 10.0.0.1 ping statistics --- 00:33:19.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.338 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.338 ************************************ 00:33:19.338 START TEST nvmf_digest_clean 00:33:19.338 ************************************ 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=4124595 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 4124595 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4124595 ']' 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.338 
20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:19.338 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.338 [2024-07-25 20:03:28.688240] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:19.338 [2024-07-25 20:03:28.688329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.338 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.338 [2024-07-25 20:03:28.760987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.596 [2024-07-25 20:03:28.855144] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.596 [2024-07-25 20:03:28.855205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.597 [2024-07-25 20:03:28.855234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.597 [2024-07-25 20:03:28.855245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.597 [2024-07-25 20:03:28.855256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:19.597 [2024-07-25 20:03:28.855283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.597 20:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.855 null0 00:33:19.855 [2024-07-25 20:03:29.063856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.855 [2024-07-25 20:03:29.088087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4124615 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4124615 /var/tmp/bperf.sock 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4124615 ']' 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:19.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:19.855 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.855 [2024-07-25 20:03:29.138184] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:19.855 [2024-07-25 20:03:29.138256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124615 ] 00:33:19.855 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.855 [2024-07-25 20:03:29.205887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.113 [2024-07-25 20:03:29.298630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.113 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.113 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:20.113 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:20.113 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:20.113 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:20.372 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.372 20:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.671 nvme0n1 00:33:20.671 20:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:20.671 20:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.930 Running I/O for 2 seconds... 
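For reference, every run_bperf iteration in this log follows the same sequence the trace above just walked through. A condensed sketch of this first run (randread, 4 KiB, queue depth 128), with the repository path shortened to a variable for readability; the NVMe-oF TCP target brought up earlier in the log is assumed to be listening on 10.0.0.2:4420:

    # Start bdevperf paused on core 1 (-m 2), serving RPCs on its own UNIX socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Finish framework init, then attach the remote controller with data digest (--ddgst) enabled.
    $SPDK/scripts/rpc.py -s $SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Kick off the timed I/O phase over the same socket.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests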
00:33:22.827 00:33:22.827 Latency(us) 00:33:22.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.827 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:22.827 nvme0n1 : 2.04 18185.86 71.04 0.00 0.00 6895.16 3276.80 45049.93 00:33:22.827 =================================================================================================================== 00:33:22.827 Total : 18185.86 71.04 0.00 0.00 6895.16 3276.80 45049.93 00:33:22.827 0 00:33:22.827 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:22.827 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:22.827 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:22.827 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:22.827 | select(.opcode=="crc32c") 00:33:22.827 | "\(.module_name) \(.executed)"' 00:33:22.827 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4124615 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4124615 ']' 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4124615 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:23.085 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:23.086 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4124615 00:33:23.086 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:23.086 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:23.086 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4124615' 00:33:23.086 killing process with pid 4124615 00:33:23.086 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4124615 00:33:23.086 Received shutdown signal, test time was about 2.000000 seconds 00:33:23.086 00:33:23.086 Latency(us) 00:33:23.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.086 =================================================================================================================== 00:33:23.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:23.086 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4124615 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:23.344 20:03:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4125026 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4125026 /var/tmp/bperf.sock 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4125026 ']' 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:23.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:23.344 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:23.344 [2024-07-25 20:03:32.754938] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:23.344 [2024-07-25 20:03:32.755031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125026 ] 00:33:23.344 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.344 Zero copy mechanism will not be used. 
00:33:23.602 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.602 [2024-07-25 20:03:32.817895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.603 [2024-07-25 20:03:32.907985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.603 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:23.603 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:23.603 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:23.603 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:23.603 20:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:24.168 20:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.168 20:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.424 nvme0n1 00:33:24.424 20:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:24.424 20:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:24.424 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:24.424 Zero copy mechanism will not be used. 00:33:24.424 Running I/O for 2 seconds... 
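The pass/fail criterion applied after each run is visible in the trace: digest.sh reads the accelerator statistics from the bdevperf instance and checks that crc32c was executed by the expected module (software here, since every run uses scan_dsa=false). A condensed sketch of that check, reusing the socket path from above:

    # Query crc32c stats over the bdevperf RPC socket and confirm the software module did the work.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c executed in software: $acc_executed ops"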
00:33:26.945 00:33:26.945 Latency(us) 00:33:26.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.945 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:26.945 nvme0n1 : 2.00 4890.16 611.27 0.00 0.00 3267.44 743.35 9660.49 00:33:26.945 =================================================================================================================== 00:33:26.945 Total : 4890.16 611.27 0.00 0.00 3267.44 743.35 9660.49 00:33:26.945 0 00:33:26.945 20:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:26.945 20:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:26.945 20:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:26.945 20:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:26.945 20:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:26.945 | select(.opcode=="crc32c") 00:33:26.945 | "\(.module_name) \(.executed)"' 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4125026 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4125026 ']' 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4125026 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4125026 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4125026' 00:33:26.945 killing process with pid 4125026 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4125026 00:33:26.945 Received shutdown signal, test time was about 2.000000 seconds 00:33:26.945 00:33:26.945 Latency(us) 00:33:26.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.945 =================================================================================================================== 00:33:26.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4125026 00:33:26.945 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:26.946 20:03:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4125430 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4125430 /var/tmp/bperf.sock 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4125430 ']' 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.946 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:26.946 [2024-07-25 20:03:36.372229] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:33:26.946 [2024-07-25 20:03:36.372314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125430 ] 00:33:27.203 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.203 [2024-07-25 20:03:36.436264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.203 [2024-07-25 20:03:36.524596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.203 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.203 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:27.203 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:27.203 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:27.204 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:27.768 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.768 20:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.024 nvme0n1 00:33:28.024 20:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:28.024 20:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:28.024 Running I/O for 2 seconds... 
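Taken together, the clean-digest pass exercises four workload shapes through the same run_bperf helper; its parameters (rw, block size, queue depth, scan_dsa) are declared at digest.sh@77-80 in the trace. The invocations seen at digest.sh@128-131 in this log are:

    run_bperf randread  4096   128 false   # 4 KiB random reads, queue depth 128
    run_bperf randread  131072 16  false   # 128 KiB random reads, queue depth 16 (above the 65536-byte zero-copy threshold)
    run_bperf randwrite 4096   128 false   # 4 KiB random writes, queue depth 128
    run_bperf randwrite 131072 16  false   # 128 KiB random writes, queue depth 16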
00:33:30.549 00:33:30.549 Latency(us) 00:33:30.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.549 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.549 nvme0n1 : 2.00 21619.77 84.45 0.00 0.00 5910.73 3228.25 12718.84 00:33:30.549 =================================================================================================================== 00:33:30.549 Total : 21619.77 84.45 0.00 0.00 5910.73 3228.25 12718.84 00:33:30.549 0 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:30.549 | select(.opcode=="crc32c") 00:33:30.549 | "\(.module_name) \(.executed)"' 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4125430 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4125430 ']' 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4125430 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4125430 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4125430' 00:33:30.549 killing process with pid 4125430 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4125430 00:33:30.549 Received shutdown signal, test time was about 2.000000 seconds 00:33:30.549 00:33:30.549 Latency(us) 00:33:30.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.549 =================================================================================================================== 00:33:30.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4125430 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:30.549 20:03:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4125882 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4125882 /var/tmp/bperf.sock 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4125882 ']' 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:30.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:30.549 20:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.549 [2024-07-25 20:03:39.957571] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:30.549 [2024-07-25 20:03:39.957646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125882 ] 00:33:30.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:30.549 Zero copy mechanism will not be used. 
00:33:30.807 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.807 [2024-07-25 20:03:40.020930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.807 [2024-07-25 20:03:40.114012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.807 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:30.807 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:30.807 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:30.807 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:30.807 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:31.372 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.372 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.629 nvme0n1 00:33:31.629 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:31.629 20:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:31.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:31.886 Zero copy mechanism will not be used. 00:33:31.886 Running I/O for 2 seconds... 
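Each run ends with the killprocess helper tearing down the bdevperf instance, and the trace shows the guard it applies first: confirm the pid is still alive, resolve its command name (reactor_1 for a bdevperf reactor), and refuse to signal anything that resolves to sudo. A paraphrase of those steps as a small function (illustrative only, not the helper's actual source):

    # Hypothetical teardown mirroring the killprocess steps visible in the trace above.
    kill_bperf() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # already exited, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_1 for a bdevperf instance
        [[ $name == sudo ]] && return 1               # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                    # wait for exit; bdevperf prints its shutdown summary here
    }
    kill_bperf 4125882                                # pid of the randwrite/131072 bperf instance above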
00:33:33.783 00:33:33.783 Latency(us) 00:33:33.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.783 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:33.783 nvme0n1 : 2.00 5304.49 663.06 0.00 0.00 3008.43 2026.76 9563.40 00:33:33.783 =================================================================================================================== 00:33:33.783 Total : 5304.49 663.06 0.00 0.00 3008.43 2026.76 9563.40 00:33:33.783 0 00:33:33.783 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:33.783 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:33.783 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:33.783 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:33.783 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:33.783 | select(.opcode=="crc32c") 00:33:33.783 | "\(.module_name) \(.executed)"' 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4125882 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4125882 ']' 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4125882 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4125882 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4125882' 00:33:34.040 killing process with pid 4125882 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4125882 00:33:34.040 Received shutdown signal, test time was about 2.000000 seconds 00:33:34.040 00:33:34.040 Latency(us) 00:33:34.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.040 =================================================================================================================== 00:33:34.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.040 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4125882 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4124595 00:33:34.297 20:03:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4124595 ']' 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4124595 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4124595 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4124595' 00:33:34.297 killing process with pid 4124595 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4124595 00:33:34.297 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4124595 00:33:34.556 00:33:34.556 real 0m15.234s 00:33:34.556 user 0m29.661s 00:33:34.556 sys 0m4.411s 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.556 ************************************ 00:33:34.556 END TEST nvmf_digest_clean 00:33:34.556 ************************************ 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:34.556 ************************************ 00:33:34.556 START TEST nvmf_digest_error 00:33:34.556 ************************************ 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=4126393 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 4126393 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4126393 ']' 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:34.556 20:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.556 [2024-07-25 20:03:43.980870] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:34.557 [2024-07-25 20:03:43.980948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.815 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.815 [2024-07-25 20:03:44.048996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.815 [2024-07-25 20:03:44.135875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.815 [2024-07-25 20:03:44.135940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.815 [2024-07-25 20:03:44.135957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.815 [2024-07-25 20:03:44.135970] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.815 [2024-07-25 20:03:44.135982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
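The error-injection variant starts the target with --wait-for-rpc so the accel layer can be re-pointed before subsystem initialization. Condensed from the target-side commands visible in the log (the network namespace and absolute binary path are environment-specific, and rpc_cmd in the test script is assumed to resolve to scripts/rpc.py against the target's default /var/tmp/spdk.sock socket named in the message above), the setup amounts to:

  # start the NVMe-oF target inside the test netns, deferring init until RPC
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
  # route crc32c operations through the error-injection accel module
  # (the accel_assign_opc call and its NOTICE appear just below)
  scripts/rpc.py accel_assign_opc -o crc32c -m error

The common_target_config step that follows then brings up the TCP transport, a null0 bdev (its name is echoed below) and the 10.0.0.2:4420 listener, as the tcp.c NOTICE lines show.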
00:33:34.815 [2024-07-25 20:03:44.136014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.816 [2024-07-25 20:03:44.208615] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.816 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.074 null0 00:33:35.074 [2024-07-25 20:03:44.326765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.074 [2024-07-25 20:03:44.351009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.074 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.074 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:35.074 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:35.074 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:35.074 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:35.074 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4126412 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4126412 /var/tmp/bperf.sock 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4126412 ']' 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:35.075 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.075 [2024-07-25 20:03:44.400467] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:35.075 [2024-07-25 20:03:44.400545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126412 ] 00:33:35.075 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.075 [2024-07-25 20:03:44.470918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.333 [2024-07-25 20:03:44.566392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.333 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:35.333 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:35.333 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.333 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.591 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:35.591 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.591 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.591 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.591 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.591 20:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.849 nvme0n1 00:33:35.849 20:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:35.849 20:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.849 20:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.107 20:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.107 20:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.107 20:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:36.107 Running I/O for 2 seconds... 00:33:36.107 [2024-07-25 20:03:45.398844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.398892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.415616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.415653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.415681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.431126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.431160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.431180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.443495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.443529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.443549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.456054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.456096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.456116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.471067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.471101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.471121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.484713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.484757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21545 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.484777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.497755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.497789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.497809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.510109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.510143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.510162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.107 [2024-07-25 20:03:45.525221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.107 [2024-07-25 20:03:45.525256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.107 [2024-07-25 20:03:45.525275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.537185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.537228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.537249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.552008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.552043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.552074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.565927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.565961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.565981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.579233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.579267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:6144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.579287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.591534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.591569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.591589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.607124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.607159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.607179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.619922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.619956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.619976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.636457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.636502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.636521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.648860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.648895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.648915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.665392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.665426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.665446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.682608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.682646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.682667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.694419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.694453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.694472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.711125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.711158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-07-25 20:03:45.711178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-07-25 20:03:45.726910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.366 [2024-07-25 20:03:45.726944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-07-25 20:03:45.726964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-07-25 20:03:45.738919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.367 [2024-07-25 20:03:45.738952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-07-25 20:03:45.738972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-07-25 20:03:45.756367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.367 [2024-07-25 20:03:45.756401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-07-25 20:03:45.756420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-07-25 20:03:45.767281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.367 [2024-07-25 20:03:45.767314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-07-25 20:03:45.767334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-07-25 20:03:45.784227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 
00:33:36.367 [2024-07-25 20:03:45.784260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-07-25 20:03:45.784288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.799718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.799751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.799771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.811503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.811537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.811557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.826933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.826967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.826987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.838812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.838846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.838866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.854927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.854960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.854980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.867106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.867141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.867159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.884471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.884525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.895935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.895969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.895988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.912549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.912596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.912617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.928005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.928039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.928068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.939707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.939741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.939761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.955526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.955559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.955579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.625 [2024-07-25 20:03:45.968206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.625 [2024-07-25 20:03:45.968239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.625 [2024-07-25 20:03:45.968259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-07-25 20:03:45.984812] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.626 [2024-07-25 20:03:45.984846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-07-25 20:03:45.984871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-07-25 20:03:46.000752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.626 [2024-07-25 20:03:46.000795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-07-25 20:03:46.000815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-07-25 20:03:46.012358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.626 [2024-07-25 20:03:46.012391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-07-25 20:03:46.012411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-07-25 20:03:46.029142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.626 [2024-07-25 20:03:46.029176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-07-25 20:03:46.029195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-07-25 20:03:46.047227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.626 [2024-07-25 20:03:46.047261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-07-25 20:03:46.047280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.884 [2024-07-25 20:03:46.061294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.884 [2024-07-25 20:03:46.061328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.884 [2024-07-25 20:03:46.061347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.884 [2024-07-25 20:03:46.073269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.884 [2024-07-25 20:03:46.073302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.884 [2024-07-25 20:03:46.073321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
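Each READ that completes above with COMMAND TRANSIENT TRANSPORT ERROR corresponds to one data-digest mismatch caught in the host-side nvme_tcp receive path (nvme_tcp_accel_seq_recv_compute_crc32_done). The mismatches are provoked by the injection RPCs issued earlier in the log; condensed in the same way as before, and assuming rpc_cmd goes to the target application's default socket while bperf_rpc adds -s /var/tmp/bperf.sock, the relevant calls are:

  # bdevperf side: keep per-error statistics and retry failed I/O indefinitely
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # configure crc32c corruption in the error accel module
  # (arguments exactly as issued by host/digest.sh in the log)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

With --bdev-retry-count -1 the failed commands are retried by the nvme bdev module rather than surfaced to bdevperf, so the workload keeps running for its full 2-second window while the digest errors accumulate.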
00:33:36.885 [2024-07-25 20:03:46.090702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.090735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.090755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.105725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.105758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.105777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.118222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.118256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.118275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.131722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.131755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.131775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.143565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.143598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.143618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.159122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.159154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.159180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.176100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.176133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.176153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.187804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.187838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.187857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.203746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.203780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.203800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.219295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.219339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.219359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.231858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.231891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.231910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.249639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.249672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.249692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.266466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.266500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.266521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.284119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.284153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.284172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.296237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.296270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.885 [2024-07-25 20:03:46.312365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:36.885 [2024-07-25 20:03:46.312398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.885 [2024-07-25 20:03:46.312418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.325375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.325408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.325427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.341239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.341272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.341292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.357364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.357397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.357417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.369026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.369067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.369088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.383874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.383908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.383927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.401180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.401213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.401238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.416893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.416926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.416951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.428862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.428895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.428915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.444938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.444972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.444992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.457736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.457770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.457790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.472290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.472325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.472344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.485664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.485697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 
[2024-07-25 20:03:46.485716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.497887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.497922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.497941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.515960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.515994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.516014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.527551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.527584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.527604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.544202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.544242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.544262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.560122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.560155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.560175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.144 [2024-07-25 20:03:46.572628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.144 [2024-07-25 20:03:46.572667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.144 [2024-07-25 20:03:46.572686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.588639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.588680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6361 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.588700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.600947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.600980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.600999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.618546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.618580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.618599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.634280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.634314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.634334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.645939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.645972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.645992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.663094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.663133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.663153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.679355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.679395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.679414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.691839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.691875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:99 nsid:1 lba:12572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.691894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.708982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.709017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.709036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.725622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.725656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.725676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.737620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.737654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.737673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.753898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.753932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.753957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.766981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.767016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.767036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.782761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.782801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.782820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.798533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.798567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.798593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.810532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.810566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.810586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.403 [2024-07-25 20:03:46.826508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.403 [2024-07-25 20:03:46.826546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.403 [2024-07-25 20:03:46.826566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.838854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.838887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.838907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.856437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.856471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.856491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.868592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.868625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.868644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.883868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.883901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.883921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.901114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 
[2024-07-25 20:03:46.901148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.901167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.917031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.917072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.917094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.929047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.929092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.929112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.945254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.945288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.945307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.957788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.957822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.957842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.972965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.973002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.973022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:46.986118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:46.986151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:46.986171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.001870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.001904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.001923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.017278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.017312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.017332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.029487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.029521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.029540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.045807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.045840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.045859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.058815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.058848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.058867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.074628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.074662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.074681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.662 [2024-07-25 20:03:47.087380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.662 [2024-07-25 20:03:47.087412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.662 [2024-07-25 20:03:47.087432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.104271] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.104305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.104329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.119817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.119850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.119882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.131801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.131834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.131853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.148692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.148732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.148752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.161701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.161735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.161754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.176886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.176919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.176947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.189307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.189341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.189360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:37.921 [2024-07-25 20:03:47.205465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.205500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.205523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.221797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.221831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.221851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.234791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.234824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.234844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.250625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.250658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.250677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.262479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.262512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.262531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.278865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.278898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.278917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.294745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.294778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.294798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.307891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.921 [2024-07-25 20:03:47.307930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.921 [2024-07-25 20:03:47.307950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.921 [2024-07-25 20:03:47.322128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.922 [2024-07-25 20:03:47.322161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.922 [2024-07-25 20:03:47.322181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.922 [2024-07-25 20:03:47.335016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.922 [2024-07-25 20:03:47.335050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.922 [2024-07-25 20:03:47.335080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.922 [2024-07-25 20:03:47.348212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:37.922 [2024-07-25 20:03:47.348245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.922 [2024-07-25 20:03:47.348264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.180 [2024-07-25 20:03:47.360792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:38.180 [2024-07-25 20:03:47.360827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.180 [2024-07-25 20:03:47.360846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.180 [2024-07-25 20:03:47.375017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4c360) 00:33:38.180 [2024-07-25 20:03:47.375055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.180 [2024-07-25 20:03:47.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.180 00:33:38.180 Latency(us) 00:33:38.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.180 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:38.180 nvme0n1 : 2.00 17540.37 68.52 0.00 0.00 7289.32 3907.89 24175.50 00:33:38.180 =================================================================================================================== 
00:33:38.180 Total : 17540.37 68.52 0.00 0.00 7289.32 3907.89 24175.50 00:33:38.180 0 00:33:38.180 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:38.180 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:38.180 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:38.180 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:38.180 | .driver_specific 00:33:38.180 | .nvme_error 00:33:38.180 | .status_code 00:33:38.180 | .command_transient_transport_error' 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4126412 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4126412 ']' 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4126412 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4126412 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4126412' 00:33:38.439 killing process with pid 4126412 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4126412 00:33:38.439 Received shutdown signal, test time was about 2.000000 seconds 00:33:38.439 00:33:38.439 Latency(us) 00:33:38.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.439 =================================================================================================================== 00:33:38.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.439 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4126412 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4126881 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 
4126881 /var/tmp/bperf.sock 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4126881 ']' 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:38.725 20:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.725 [2024-07-25 20:03:47.927565] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:38.726 [2024-07-25 20:03:47.927658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126881 ] 00:33:38.726 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.726 Zero copy mechanism will not be used. 00:33:38.726 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.726 [2024-07-25 20:03:47.988709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.726 [2024-07-25 20:03:48.078160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.984 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:38.984 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:38.984 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.984 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.241 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:39.241 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.241 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.241 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.241 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.241 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.499 nvme0n1 00:33:39.499 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:39.499 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:39.499 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.499 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.499 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:39.499 20:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:39.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:39.499 Zero copy mechanism will not be used. 00:33:39.499 Running I/O for 2 seconds... 00:33:39.499 [2024-07-25 20:03:48.903016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.499 [2024-07-25 20:03:48.903110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.500 [2024-07-25 20:03:48.903134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.500 [2024-07-25 20:03:48.910001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.500 [2024-07-25 20:03:48.910037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.500 [2024-07-25 20:03:48.910056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.500 [2024-07-25 20:03:48.916737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.500 [2024-07-25 20:03:48.916772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.500 [2024-07-25 20:03:48.916791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.500 [2024-07-25 20:03:48.923386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.500 [2024-07-25 20:03:48.923420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.500 [2024-07-25 20:03:48.923448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.929883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.929917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.758 [2024-07-25 20:03:48.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.936519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.936552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.758 [2024-07-25 20:03:48.936571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.944431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.944477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.758 [2024-07-25 20:03:48.944497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.952621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.952656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.758 [2024-07-25 20:03:48.952675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.960874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.960909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.758 [2024-07-25 20:03:48.960928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.970107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.970140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.758 [2024-07-25 20:03:48.970158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.758 [2024-07-25 20:03:48.978803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.758 [2024-07-25 20:03:48.978846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:48.978866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:48.987885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:48.987921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:48.987941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:48.996463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:48.996505] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:48.996524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.005781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.005816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.005835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.015306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.015354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.015374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.024770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.024807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.034051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.034116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.034132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.043147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.043193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.043209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.051692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.051729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.051748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.059288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.059320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.059336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.066551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.066586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.073927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.073961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.073980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.081161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.081192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.081209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.087628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.087660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.087679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.094266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.094295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.094312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.100719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.100755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.100774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.107240] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.107269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.107285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.113812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.113845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.113863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.120346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.120392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.120411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.126817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.126848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.126873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.133295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.133324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.133340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.139906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.139938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.139957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.146409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.146441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.146459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
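Note: the completions above are the expected outcome of this stage of the digest test. With crc32c corruption being injected through accel_error_inject_error (traced earlier), every read that hits a bad data digest completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried by the bdev layer rather than surfaced as a failure, since bdev_nvme_set_options was called with --bdev-retry-count -1 above. The len:32 in these records simply reflects the 131072-byte I/O size of this run at a 4096-byte block size (32 x 4096 = 131072), where the earlier 4096-byte run logged len:1. The pass/fail decision looks only at how many such completions are recorded. Below is a minimal sketch of that counting step, reusing the rpc.py invocation and jq filter from the digest.sh trace above; it is an illustration of the mechanism, not the test script itself.

  # Count completions recorded as "command transient transport error" for nvme0n1,
  # using the same bperf RPC socket and jq path as digest.sh's get_transient_errcount.
  errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
             -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # The stage passes only when the count is non-zero, mirroring the
  # "(( 137 > 0 ))" check seen after the first run.
  (( errs > 0 )) && echo "observed ${errs} transient transport errors"

These counters appear in the bdev_get_iostat output because --nvme-error-stat was enabled via bdev_nvme_set_options when the bperf controller was configured.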
00:33:39.759 [2024-07-25 20:03:49.152906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.152937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.152955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.159351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.759 [2024-07-25 20:03:49.159398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.759 [2024-07-25 20:03:49.159416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.759 [2024-07-25 20:03:49.166300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.760 [2024-07-25 20:03:49.166335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.760 [2024-07-25 20:03:49.166353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.760 [2024-07-25 20:03:49.172388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.760 [2024-07-25 20:03:49.172419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.760 [2024-07-25 20:03:49.172436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.760 [2024-07-25 20:03:49.178269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.760 [2024-07-25 20:03:49.178298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.760 [2024-07-25 20:03:49.178315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.760 [2024-07-25 20:03:49.184489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:39.760 [2024-07-25 20:03:49.184520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.760 [2024-07-25 20:03:49.184537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.190231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.190260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.190276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.196341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.196388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.196407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.202788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.202820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.202838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.209261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.209290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.209307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.215814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.215846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.215864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.222268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.222297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.222314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.228237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.228266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.228282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.234786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.234818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.234843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.241319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.241366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.241385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.247885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.247918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.247937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.254295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.254325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.254342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.260681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.260714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.260732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.267165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.267194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.267210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.273550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.273582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.273600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.279953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.279986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.280004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.286435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.286467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.286485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.292926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.292967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.292986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.299370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.299402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.299420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.305891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.305924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.305942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.312279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.312307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.312323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.318693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.318725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.318743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.325007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.325038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:40.019 [2024-07-25 20:03:49.325056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.331377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.331409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.331427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.337781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.019 [2024-07-25 20:03:49.337819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.019 [2024-07-25 20:03:49.337837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.019 [2024-07-25 20:03:49.344149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.344176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.344192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.350616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.350648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.350666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.357037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.357076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.357111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.363458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.363490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.363508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.370125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.370154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.370171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.376623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.376655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.376673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.383065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.383112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.383129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.389564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.389596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.389614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.396046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.396119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.402556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.402588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.402612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.408960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.408991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.409009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.415403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.415435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.415452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.422415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.422448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.422468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.428777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.428808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.428826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.435206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.435245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.435262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.020 [2024-07-25 20:03:49.441657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.020 [2024-07-25 20:03:49.441689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.020 [2024-07-25 20:03:49.441707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.448181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.448211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.448227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.454540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.454571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.454590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.461089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 
00:33:40.279 [2024-07-25 20:03:49.461137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.461154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.467485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.467517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.467535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.473881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.473912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.473930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.480284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.480313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.480329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.486712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.486744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.486762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.493184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.493213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.493229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.499608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.499640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.499658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.506005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.506037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.506055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.512433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.512465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.518830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.518862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.518880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.525529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.525562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.525580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.532044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.532090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.279 [2024-07-25 20:03:49.532124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.279 [2024-07-25 20:03:49.538517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.279 [2024-07-25 20:03:49.538550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.538568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.544967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.544999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.545017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.551412] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.551444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.551463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.557899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.557930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.557948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.564321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.564349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.564382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.570715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.570747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.570772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.577214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.577243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.577259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.583616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.583647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.583665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.589989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.590020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.590039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:40.280 [2024-07-25 20:03:49.596460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.596493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.596511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.602844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.602876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.602894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.609776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.609809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.609827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.617992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.618025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.618045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.626055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.626109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.626127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.633958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.633993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.634013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.640540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.640572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.640591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.646917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.646949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.646967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.653376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.653424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.653442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.659715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.659747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.659765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.666244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.666272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.666289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.673115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.673145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.673162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.679513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.679545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.679564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.685911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.685943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.685966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.692350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.692394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.692413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.698863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.698894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.698913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.280 [2024-07-25 20:03:49.705472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.280 [2024-07-25 20:03:49.705504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.280 [2024-07-25 20:03:49.705522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.711807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.711838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.711856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.718271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.718300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.718316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.724657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.724690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.724708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.731148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.731176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:40.538 [2024-07-25 20:03:49.731193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.737479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.737511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.737530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.744001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.744037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.744056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.750547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.750579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.750598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.756876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.756908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.756926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.763296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.763324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.763340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.769748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.769780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.538 [2024-07-25 20:03:49.769798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.538 [2024-07-25 20:03:49.776193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.538 [2024-07-25 20:03:49.776220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.776236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.782631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.782663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.782681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.789492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.789525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.789543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.795905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.795938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.795957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.802402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.802435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.802453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.808927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.808959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.808977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.815360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.815389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.815422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.821718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.821749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.821767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.828181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.828210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.828226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.834552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.834584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.834603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.840950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.840982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.841001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.847441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.847472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.847490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.853944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.853976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.853999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.860560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.860592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.860611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.867231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 
00:33:40.539 [2024-07-25 20:03:49.867262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.867279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.873681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.873714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.873732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.880111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.880140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.880157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.886515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.886547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.886565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.892967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.892999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.893018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.899345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.899389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.899408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.905789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.905822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.905840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.912308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.912358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.912378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.918700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.918732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.918751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.925724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.925757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.925777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.932220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.932249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.932265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.938672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.938704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.938722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.945131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.945160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.945176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.951549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.951582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.951600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.959015] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.959049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.959076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.539 [2024-07-25 20:03:49.963613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.539 [2024-07-25 20:03:49.963646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.539 [2024-07-25 20:03:49.963670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.797 [2024-07-25 20:03:49.971881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.797 [2024-07-25 20:03:49.971916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.797 [2024-07-25 20:03:49.971934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.797 [2024-07-25 20:03:49.980133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.797 [2024-07-25 20:03:49.980163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.797 [2024-07-25 20:03:49.980195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.797 [2024-07-25 20:03:49.988651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.797 [2024-07-25 20:03:49.988685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.797 [2024-07-25 20:03:49.988704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.797 [2024-07-25 20:03:49.996790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.797 [2024-07-25 20:03:49.996824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:49.996842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.005299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.005339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.005358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:40.798 [2024-07-25 20:03:50.013531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.013591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.013625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.021343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.021376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.021394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.029413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.029445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.029477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.037349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.037393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.037411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.045585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.045618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.045635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.053688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.053721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.053738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.062046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.062087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.062115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.070089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.070132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.070150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.079474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.079509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.079529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.088457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.088493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.088513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.096408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.096443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.096463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.104713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.104750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.104769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.112030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.112074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.112095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.119451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.119486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.119505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.123463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.123496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.123514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.130853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.130888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.130907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.137728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.137766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.137786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.143700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.143734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.143753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.150253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.150283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.150300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.156918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.156951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.156970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.163652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.163686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:40.798 [2024-07-25 20:03:50.163715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.170145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.170190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.170207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.176344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.176388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.176404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.182581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.798 [2024-07-25 20:03:50.182615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.798 [2024-07-25 20:03:50.182633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.798 [2024-07-25 20:03:50.189149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.799 [2024-07-25 20:03:50.189179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.799 [2024-07-25 20:03:50.189195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.799 [2024-07-25 20:03:50.195661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.799 [2024-07-25 20:03:50.195695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.799 [2024-07-25 20:03:50.195713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.799 [2024-07-25 20:03:50.202204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.799 [2024-07-25 20:03:50.202234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.799 [2024-07-25 20:03:50.202250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.799 [2024-07-25 20:03:50.208850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.799 [2024-07-25 20:03:50.208884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.799 [2024-07-25 20:03:50.208903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.799 [2024-07-25 20:03:50.215434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.799 [2024-07-25 20:03:50.215467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.799 [2024-07-25 20:03:50.215485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.799 [2024-07-25 20:03:50.222008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:40.799 [2024-07-25 20:03:50.222048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.799 [2024-07-25 20:03:50.222076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.056 [2024-07-25 20:03:50.228545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.056 [2024-07-25 20:03:50.228578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.056 [2024-07-25 20:03:50.228596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.056 [2024-07-25 20:03:50.234860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.056 [2024-07-25 20:03:50.234892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.056 [2024-07-25 20:03:50.234910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.056 [2024-07-25 20:03:50.241428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.056 [2024-07-25 20:03:50.241456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.056 [2024-07-25 20:03:50.241473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.056 [2024-07-25 20:03:50.247610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.056 [2024-07-25 20:03:50.247643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.056 [2024-07-25 20:03:50.247662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.056 [2024-07-25 20:03:50.254056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.056 [2024-07-25 20:03:50.254096] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.056 [2024-07-25 20:03:50.254114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.260577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.260610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.260628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.267336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.267385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.267403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.271248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.271278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.271294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.276961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.276994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.277012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.284196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.284227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.284244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.291599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.291630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.291648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.298586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.298620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.298639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.305209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.305239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.305256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.311853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.311886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.311905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.318609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.318642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.318661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.325343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.325371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.325402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.331917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.331950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.338473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.338503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.338520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.345272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.345300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.345331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.351672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.351718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.351737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.358206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.358233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.358265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.364716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.364749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.364767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.371272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.371302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.371320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.377771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.377804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.377822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.384263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.384292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.384309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.390635] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.390667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.390685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.396997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.397029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.397047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.403402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.403434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.403453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.409885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.409917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.409935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.416249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.416277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.416308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.422672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.422703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.057 [2024-07-25 20:03:50.422722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.057 [2024-07-25 20:03:50.428791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.057 [2024-07-25 20:03:50.428825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.428843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:41.058 [2024-07-25 20:03:50.435216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.435246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.435262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.441704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.441736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.441760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.448080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.448121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.448154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.454512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.454544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.454563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.461117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.461146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.461162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.467557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.467589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.467607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.474010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.474043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.474069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.058 [2024-07-25 20:03:50.480633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.058 [2024-07-25 20:03:50.480665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.058 [2024-07-25 20:03:50.480683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.487238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.487283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.487299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.493683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.493715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.493733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.500171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.500217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.500233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.506698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.506748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.513077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.513123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.513139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.519544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.519576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.519594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.525996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.526027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.526046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.533034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.533073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.533093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.541442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.541478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.541498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.549519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.549556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.549575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.557323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.557372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.557393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.563187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.563216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.563232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.569407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.569439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:41.316 [2024-07-25 20:03:50.569458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.575702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.575734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.575752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.582260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.582290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.582306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.316 [2024-07-25 20:03:50.588735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.316 [2024-07-25 20:03:50.588768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.316 [2024-07-25 20:03:50.588786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.595238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.595267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.595283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.601643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.601675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.601694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.608216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.608248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.608265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.614698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.614730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.614754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.621312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.621341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.621358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.627719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.627752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.627771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.634116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.634160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.634177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.640605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.640637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.640656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.647032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.647070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.647090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.653515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.653547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.653565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.659880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.659912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.659930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.666395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.666427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.666445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.672897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.672935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.672954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.679161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.679192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.679209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.685614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.685646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.685664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.692136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.692165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.692180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.698620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.698652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.698670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.705238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 
00:33:41.317 [2024-07-25 20:03:50.705267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.705284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.711770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.711802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.711820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.718248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.718277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.718294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.724765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.724797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.724821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.731103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.731132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.731148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.737431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.737462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.737481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.317 [2024-07-25 20:03:50.743932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.317 [2024-07-25 20:03:50.743964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.317 [2024-07-25 20:03:50.743983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.750388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.750420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.750438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.756848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.756880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.756898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.763382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.763414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.763432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.769766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.769798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.769816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.776114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.776142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.776159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.782483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.782520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.782539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.788965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.788997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.789015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.795384] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.795416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.795435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.801801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.801833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.801851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.808207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.808236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.808252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.814609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.814640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.814658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.820971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.821004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.821021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.827403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.827435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.827453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.576 [2024-07-25 20:03:50.833814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.833857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.833875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:41.576 [2024-07-25 20:03:50.840219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.576 [2024-07-25 20:03:50.840248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.576 [2024-07-25 20:03:50.840264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.846683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.846716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.846733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.853150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.853180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.853197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.859611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.859644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.859662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.866034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.866073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.866093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.872560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.872592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.879002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.879034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.879052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.885551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.885584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.885602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.577 [2024-07-25 20:03:50.891999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1563d50) 00:33:41.577 [2024-07-25 20:03:50.892031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.577 [2024-07-25 20:03:50.892056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.577 00:33:41.577 Latency(us) 00:33:41.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.577 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:41.577 nvme0n1 : 2.00 4615.91 576.99 0.00 0.00 3461.71 794.93 10048.85 00:33:41.577 =================================================================================================================== 00:33:41.577 Total : 4615.91 576.99 0.00 0.00 3461.71 794.93 10048.85 00:33:41.577 0 00:33:41.577 20:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:41.577 20:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:41.577 20:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:41.577 20:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:41.577 | .driver_specific 00:33:41.577 | .nvme_error 00:33:41.577 | .status_code 00:33:41.577 | .command_transient_transport_error' 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 297 > 0 )) 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4126881 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4126881 ']' 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4126881 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4126881 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4126881' 00:33:41.835 killing process with pid 4126881 00:33:41.835 20:03:51 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4126881 00:33:41.835 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.835 00:33:41.835 Latency(us) 00:33:41.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.835 =================================================================================================================== 00:33:41.835 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.835 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4126881 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4127343 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4127343 /var/tmp/bperf.sock 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4127343 ']' 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:42.093 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.093 [2024-07-25 20:03:51.429577] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
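Note: the pass/fail check traced above (host/digest.sh@71 via get_transient_errcount at digest.sh@27/@28) reduces to reading the NVMe error counters that bdevperf accumulates once bdev_nvme_set_options --nvme-error-stat is in effect. A minimal standalone sketch, assuming the bperf RPC socket /var/tmp/bperf.sock and the bdev name nvme0n1 used in this run:

  # Hypothetical standalone form of the check; paths and the bdev name are copied from this run.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test passes when at least one transient transport error was counted
  # (the trace above shows 297 such completions for the randread pass).
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"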
00:33:42.093 [2024-07-25 20:03:51.429652] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127343 ] 00:33:42.093 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.093 [2024-07-25 20:03:51.490279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.351 [2024-07-25 20:03:51.579032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.351 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.351 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:42.351 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:42.351 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:42.609 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:42.609 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.609 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.609 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.609 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.609 20:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.866 nvme0n1 00:33:42.866 20:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:42.866 20:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.866 20:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.866 20:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.866 20:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:42.866 20:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:43.124 Running I/O for 2 seconds... 
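Note: the randwrite error-injection pass set up in the trace above follows the same pattern as the preceding randread one. Condensed into a runnable sketch, with paths, addresses and parameters copied from this run; the accel_error_inject_error calls are issued through rpc_cmd in the trace and are assumed here to reach the target application on its default RPC socket, while the bdev RPCs go to the bperf instance on /var/tmp/bperf.sock:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock

  # Start bdevperf idle (-z) so the workload can be kicked off over RPC later.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  sleep 1   # the real test uses waitforlisten on the socket instead of a fixed delay

  # Enable per-bdev NVMe error accounting and unlimited retries on the initiator side.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previously configured crc32c error injection.
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  # Attach the remote subsystem with TCP data digest enabled (--ddgst).
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm crc32c error injection with the parameters shown in the trace (-i 256) so that
  # data-digest verification fails and I/Os complete with TRANSIENT TRANSPORT ERROR, as logged below.
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Drive the 2-second randwrite workload whose per-IO digest errors follow in the log.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests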
00:33:43.124 [2024-07-25 20:03:52.405472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ee5c8 00:33:43.124 [2024-07-25 20:03:52.406504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.406548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.418896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7970 00:33:43.124 [2024-07-25 20:03:52.419754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.419786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.432357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e4de8 00:33:43.124 [2024-07-25 20:03:52.433372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.433402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.444485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f2d80 00:33:43.124 [2024-07-25 20:03:52.446262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.446292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.455470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fe720 00:33:43.124 [2024-07-25 20:03:52.456287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.456331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.469728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e88f8 00:33:43.124 [2024-07-25 20:03:52.470752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.470798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.482794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e2c28 00:33:43.124 [2024-07-25 20:03:52.483972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.484004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 
p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.494890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fd640 00:33:43.124 [2024-07-25 20:03:52.496072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.496115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.508160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f9f68 00:33:43.124 [2024-07-25 20:03:52.509465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.509497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.520017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190de8a8 00:33:43.124 [2024-07-25 20:03:52.520843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.520890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.532777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ea248 00:33:43.124 [2024-07-25 20:03:52.533504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.533533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.124 [2024-07-25 20:03:52.546065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f8e88 00:33:43.124 [2024-07-25 20:03:52.546896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-07-25 20:03:52.546925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.559347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:43.382 [2024-07-25 20:03:52.560345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.560374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.571237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f20d8 00:33:43.382 [2024-07-25 20:03:52.572972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.573003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.582051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190eff18 00:33:43.382 [2024-07-25 20:03:52.582858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.582902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.595285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e1710 00:33:43.382 [2024-07-25 20:03:52.596298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.596327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.608533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f46d0 00:33:43.382 [2024-07-25 20:03:52.609688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.609716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.621809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fc560 00:33:43.382 [2024-07-25 20:03:52.623170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.623198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.635067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f4b08 00:33:43.382 [2024-07-25 20:03:52.636622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.648429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fa7d8 00:33:43.382 [2024-07-25 20:03:52.650109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.650137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.661691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0788 00:33:43.382 [2024-07-25 20:03:52.663609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.663653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.674997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fdeb0 00:33:43.382 [2024-07-25 20:03:52.677086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.677115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.684091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ef6a8 00:33:43.382 [2024-07-25 20:03:52.684937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.684984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.696043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190dfdc0 00:33:43.382 [2024-07-25 20:03:52.696861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.696906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.709340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e2c28 00:33:43.382 [2024-07-25 20:03:52.710341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.710383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.722733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e6300 00:33:43.382 [2024-07-25 20:03:52.723949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.724004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.736069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e88f8 00:33:43.382 [2024-07-25 20:03:52.737374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.737417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.748575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ec408 00:33:43.382 [2024-07-25 20:03:52.750306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.750336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.761743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ea248 00:33:43.382 [2024-07-25 20:03:52.763717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.763749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.772656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190eee38 00:33:43.382 [2024-07-25 20:03:52.773641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.773667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.786745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6458 00:33:43.382 [2024-07-25 20:03:52.787924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.787954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.382 [2024-07-25 20:03:52.799805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e3060 00:33:43.382 [2024-07-25 20:03:52.801147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.382 [2024-07-25 20:03:52.801174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.813111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6cc8 00:33:43.641 [2024-07-25 20:03:52.814678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.814706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.825179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fc998 00:33:43.641 [2024-07-25 20:03:52.826667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.826694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.838410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e88f8 00:33:43.641 [2024-07-25 20:03:52.840096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.840124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.851635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e0630 00:33:43.641 [2024-07-25 20:03:52.853453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.853490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.864851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7538 00:33:43.641 [2024-07-25 20:03:52.866852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.866879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.873885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f5378 00:33:43.641 [2024-07-25 20:03:52.874703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.874735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.888332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e3060 00:33:43.641 [2024-07-25 20:03:52.890287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.890316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.899233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5ec8 00:33:43.641 [2024-07-25 20:03:52.900218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.900246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.912580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0bc0 00:33:43.641 [2024-07-25 20:03:52.913746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.913775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.925853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ed4e8 00:33:43.641 [2024-07-25 20:03:52.927244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.927277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.939199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e4de8 00:33:43.641 [2024-07-25 20:03:52.940683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.940715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.952462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e0a68 00:33:43.641 [2024-07-25 20:03:52.954118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.954145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.965675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e3498 00:33:43.641 [2024-07-25 20:03:52.967500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.967533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.978917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fe720 00:33:43.641 [2024-07-25 20:03:52.980957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.980985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:52.988024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7100 00:33:43.641 [2024-07-25 20:03:52.988836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:52.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:53.002624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e4140 00:33:43.641 [2024-07-25 20:03:53.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 20:03:53.004134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.641 [2024-07-25 20:03:53.015863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f8a50 00:33:43.641 [2024-07-25 20:03:53.017534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.641 [2024-07-25 
20:03:53.017561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.642 [2024-07-25 20:03:53.027701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190df550 00:33:43.642 [2024-07-25 20:03:53.028838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.642 [2024-07-25 20:03:53.028882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.642 [2024-07-25 20:03:53.040471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f5be8 00:33:43.642 [2024-07-25 20:03:53.041452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.642 [2024-07-25 20:03:53.041483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.642 [2024-07-25 20:03:53.053619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f92c0 00:33:43.642 [2024-07-25 20:03:53.054779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.642 [2024-07-25 20:03:53.054807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.642 [2024-07-25 20:03:53.065610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ff3c8 00:33:43.642 [2024-07-25 20:03:53.067543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.642 [2024-07-25 20:03:53.067575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.076531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e6fa8 00:33:43.900 [2024-07-25 20:03:53.077504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.090617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f9b30 00:33:43.900 [2024-07-25 20:03:53.091787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.091833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.103672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f4298 00:33:43.900 [2024-07-25 20:03:53.105006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.900 [2024-07-25 20:03:53.105033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.115643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190feb58 00:33:43.900 [2024-07-25 20:03:53.116934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.116965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.128900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190de038 00:33:43.900 [2024-07-25 20:03:53.130386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.130417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.142170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e4140 00:33:43.900 [2024-07-25 20:03:53.143846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.143873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.155474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e1b48 00:33:43.900 [2024-07-25 20:03:53.157286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.157313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.168666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f9f68 00:33:43.900 [2024-07-25 20:03:53.170649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.170677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.181996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0bc0 00:33:43.900 [2024-07-25 20:03:53.184244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.184272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.191082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5220 00:33:43.900 [2024-07-25 20:03:53.192085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20746 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:43.900 [2024-07-25 20:03:53.192130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.204435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ea248 00:33:43.900 [2024-07-25 20:03:53.205566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.205598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.218849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6458 00:33:43.900 [2024-07-25 20:03:53.220650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.220682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.232210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5ec8 00:33:43.900 [2024-07-25 20:03:53.234243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.234289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.245471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6020 00:33:43.900 [2024-07-25 20:03:53.247628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.247656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.254413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e7c50 00:33:43.900 [2024-07-25 20:03:53.255361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.900 [2024-07-25 20:03:53.255403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.900 [2024-07-25 20:03:53.266286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190df988 00:33:43.901 [2024-07-25 20:03:53.267255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.901 [2024-07-25 20:03:53.267282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.901 [2024-07-25 20:03:53.279607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ff3c8 00:33:43.901 [2024-07-25 20:03:53.280714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18696 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:43.901 [2024-07-25 20:03:53.280740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.901 [2024-07-25 20:03:53.292919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:43.901 [2024-07-25 20:03:53.294252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.901 [2024-07-25 20:03:53.294285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.901 [2024-07-25 20:03:53.306183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ee5c8 00:33:43.901 [2024-07-25 20:03:53.307648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.901 [2024-07-25 20:03:53.307675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.901 [2024-07-25 20:03:53.319349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e3d08 00:33:43.901 [2024-07-25 20:03:53.321008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.901 [2024-07-25 20:03:53.321035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.332692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e12d8 00:33:44.161 [2024-07-25 20:03:53.334521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.334567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.345994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fda78 00:33:44.161 [2024-07-25 20:03:53.348000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.348026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.359274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f2510 00:33:44.161 [2024-07-25 20:03:53.361462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.361490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.368273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e6738 00:33:44.161 [2024-07-25 20:03:53.369262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19429 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.369304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.380262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fb480 00:33:44.161 [2024-07-25 20:03:53.381248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.381291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.393554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6458 00:33:44.161 [2024-07-25 20:03:53.394690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.394716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.406776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:44.161 [2024-07-25 20:03:53.408096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.408124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.161 [2024-07-25 20:03:53.419997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7538 00:33:44.161 [2024-07-25 20:03:53.421406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.161 [2024-07-25 20:03:53.421438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.433218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e99d8 00:33:44.162 [2024-07-25 20:03:53.434891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.434919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.446568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6020 00:33:44.162 [2024-07-25 20:03:53.448357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.448400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.459720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5ec8 00:33:44.162 [2024-07-25 20:03:53.461704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:6989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.461731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.473057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ff3c8 00:33:44.162 [2024-07-25 20:03:53.475240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.475267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.482172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e7c50 00:33:44.162 [2024-07-25 20:03:53.483137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.483166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.495529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0ff8 00:33:44.162 [2024-07-25 20:03:53.496711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.496763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.507554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f9b30 00:33:44.162 [2024-07-25 20:03:53.508658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.508704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.520835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:44.162 [2024-07-25 20:03:53.522147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.522174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.534020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e8088 00:33:44.162 [2024-07-25 20:03:53.535507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.535535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.547208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fe2e8 00:33:44.162 [2024-07-25 20:03:53.548883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.548910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.560578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f5378 00:33:44.162 [2024-07-25 20:03:53.562454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.562496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.572421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ea248 00:33:44.162 [2024-07-25 20:03:53.573725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.573758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.162 [2024-07-25 20:03:53.585303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0788 00:33:44.162 [2024-07-25 20:03:53.586472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.162 [2024-07-25 20:03:53.586501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.597401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e9168 00:33:44.419 [2024-07-25 20:03:53.599340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.599385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.608326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190dece0 00:33:44.419 [2024-07-25 20:03:53.609300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.609343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.621591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6020 00:33:44.419 [2024-07-25 20:03:53.622726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.622758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.634936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:44.419 [2024-07-25 20:03:53.636253] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.636296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.648224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e2c28 00:33:44.419 [2024-07-25 20:03:53.649693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.649721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.661405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e23b8 00:33:44.419 [2024-07-25 20:03:53.663067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.663094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.674641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fc998 00:33:44.419 [2024-07-25 20:03:53.676460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.676492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.687863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190eaef0 00:33:44.419 [2024-07-25 20:03:53.689870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.689897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.701155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e99d8 00:33:44.419 [2024-07-25 20:03:53.703320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.703364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.710179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e7c50 00:33:44.419 [2024-07-25 20:03:53.711143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.711172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.723533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7da8 00:33:44.419 [2024-07-25 20:03:53.724678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.724705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.735593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f5378 00:33:44.419 [2024-07-25 20:03:53.736715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.736743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.748858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:44.419 [2024-07-25 20:03:53.750175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.750203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.762043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ed0b0 00:33:44.419 [2024-07-25 20:03:53.763521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.763548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.775259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0350 00:33:44.419 [2024-07-25 20:03:53.776920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.776947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.788517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e1710 00:33:44.419 [2024-07-25 20:03:53.790356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.790401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.801811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fda78 00:33:44.419 [2024-07-25 20:03:53.803810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.803853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.812010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6458 00:33:44.419 [2024-07-25 20:03:53.813323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.813366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.825278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7538 00:33:44.419 [2024-07-25 20:03:53.826754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.826781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.419 [2024-07-25 20:03:53.838619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ddc00 00:33:44.419 [2024-07-25 20:03:53.840261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.419 [2024-07-25 20:03:53.840304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.851934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190df550 00:33:44.678 [2024-07-25 20:03:53.853749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.853776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.865167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fe2e8 00:33:44.678 [2024-07-25 20:03:53.867168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.867210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.878425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e23b8 00:33:44.678 [2024-07-25 20:03:53.880576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.880603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.887409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fb048 00:33:44.678 [2024-07-25 20:03:53.888358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.888400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.900724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f6020 00:33:44.678 [2024-07-25 
20:03:53.901863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.901890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.915145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f4b08 00:33:44.678 [2024-07-25 20:03:53.916948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.916975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.926890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190efae0 00:33:44.678 [2024-07-25 20:03:53.928127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.928156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.938515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ec840 00:33:44.678 [2024-07-25 20:03:53.939790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.939817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.951886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f4f40 00:33:44.678 [2024-07-25 20:03:53.953334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.953380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.965136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190eff18 00:33:44.678 [2024-07-25 20:03:53.966735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.966762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.978418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5ec8 00:33:44.678 [2024-07-25 20:03:53.980267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.980294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:53.991687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190dece0 
00:33:44.678 [2024-07-25 20:03:53.993625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:53.993652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:54.004900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ebfd0 00:33:44.678 [2024-07-25 20:03:54.007035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:54.007069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:54.014125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ed920 00:33:44.678 [2024-07-25 20:03:54.015066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:54.015093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:54.027309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5658 00:33:44.678 [2024-07-25 20:03:54.028411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.678 [2024-07-25 20:03:54.028453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:44.678 [2024-07-25 20:03:54.039190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ee190 00:33:44.679 [2024-07-25 20:03:54.040286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.679 [2024-07-25 20:03:54.040329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:44.679 [2024-07-25 20:03:54.052337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f4f40 00:33:44.679 [2024-07-25 20:03:54.053606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.679 [2024-07-25 20:03:54.053633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:44.679 [2024-07-25 20:03:54.065587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f7970 00:33:44.679 [2024-07-25 20:03:54.067029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.679 [2024-07-25 20:03:54.067069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.679 [2024-07-25 20:03:54.078849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with 
pdu=0x2000190ddc00 00:33:44.679 [2024-07-25 20:03:54.080447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.679 [2024-07-25 20:03:54.080478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.679 [2024-07-25 20:03:54.092127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ee190 00:33:44.679 [2024-07-25 20:03:54.093896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.679 [2024-07-25 20:03:54.093923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:44.679 [2024-07-25 20:03:54.105387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fa3a0 00:33:44.679 [2024-07-25 20:03:54.107400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.679 [2024-07-25 20:03:54.107429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.117289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e99d8 00:33:44.938 [2024-07-25 20:03:54.118754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.118780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.128882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190df550 00:33:44.938 [2024-07-25 20:03:54.130774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.130805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.139756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e01f8 00:33:44.938 [2024-07-25 20:03:54.140688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.140715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.152980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5ec8 00:33:44.938 [2024-07-25 20:03:54.154069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.154101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.166217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdabbc0) with pdu=0x2000190e5658 00:33:44.938 [2024-07-25 20:03:54.167505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.167532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.179396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f2510 00:33:44.938 [2024-07-25 20:03:54.180865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.180896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.192611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190eaab8 00:33:44.938 [2024-07-25 20:03:54.194245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.194288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.205847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e5ec8 00:33:44.938 [2024-07-25 20:03:54.207640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.207666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.219130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fac10 00:33:44.938 [2024-07-25 20:03:54.221097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.221124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.232349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190de8a8 00:33:44.938 [2024-07-25 20:03:54.234480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.234511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.241279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fef90 00:33:44.938 [2024-07-25 20:03:54.242207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.242248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.254677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdabbc0) with pdu=0x2000190e4de8 00:33:44.938 [2024-07-25 20:03:54.255774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.255802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.269070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e4578 00:33:44.938 [2024-07-25 20:03:54.270821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.270849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.282280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f3a28 00:33:44.938 [2024-07-25 20:03:54.284269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.284311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.295539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f0bc0 00:33:44.938 [2024-07-25 20:03:54.297617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.297649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.304504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190f57b0 00:33:44.938 [2024-07-25 20:03:54.305426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.305458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.316488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ee5c8 00:33:44.938 [2024-07-25 20:03:54.317378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.317420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.329706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e49b0 00:33:44.938 [2024-07-25 20:03:54.330802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.330829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.342932] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e1b48 00:33:44.938 [2024-07-25 20:03:54.344246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.344288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:44.938 [2024-07-25 20:03:54.356160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fdeb0 00:33:44.938 [2024-07-25 20:03:54.357604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.938 [2024-07-25 20:03:54.357631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:45.197 [2024-07-25 20:03:54.369369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190ef6a8 00:33:45.197 [2024-07-25 20:03:54.371015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.197 [2024-07-25 20:03:54.371044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:45.197 [2024-07-25 20:03:54.382642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190e49b0 00:33:45.197 [2024-07-25 20:03:54.384472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.197 [2024-07-25 20:03:54.384514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:45.197 [2024-07-25 20:03:54.394023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabbc0) with pdu=0x2000190fe720 00:33:45.197 [2024-07-25 20:03:54.394946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.197 [2024-07-25 20:03:54.394977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:45.197 00:33:45.197 Latency(us) 00:33:45.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.197 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.197 nvme0n1 : 2.00 20109.08 78.55 0.00 0.00 6354.29 2597.17 15825.73 00:33:45.197 =================================================================================================================== 00:33:45.197 Total : 20109.08 78.55 0.00 0.00 6354.29 2597.17 15825.73 00:33:45.197 0 00:33:45.197 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:45.197 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:45.197 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:45.197 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:45.197 | 
.driver_specific 00:33:45.197 | .nvme_error 00:33:45.197 | .status_code 00:33:45.197 | .command_transient_transport_error' 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4127343 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4127343 ']' 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4127343 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4127343 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4127343' 00:33:45.455 killing process with pid 4127343 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4127343 00:33:45.455 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.455 00:33:45.455 Latency(us) 00:33:45.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.455 =================================================================================================================== 00:33:45.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.455 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4127343 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4127751 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4127751 /var/tmp/bperf.sock 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4127751 ']' 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
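The trace above pulls the transient-transport-error counter that bdevperf accumulated during the 2-second randwrite run (the (( 158 > 0 )) check) before killing the first bperf process and relaunching it for the 131072-byte, qd=16 pass. A minimal standalone sketch of that query, assuming the bperf RPC socket at /var/tmp/bperf.sock is still listening and jq is installed; the SPDK_DIR variable and the final echo are illustrative additions, while the rpc.py invocation and jq filter are the ones shown in the trace:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: workspace layout of this job
  # bdev_get_iostat reports per-bdev NVMe error counters because the test enables
  # them via bdev_nvme_set_options --nvme-error-stat; extract the number of
  # COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes when at least one such error was observed, mirroring the check above.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"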
00:33:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:45.714 20:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.714 [2024-07-25 20:03:54.965804] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:45.714 [2024-07-25 20:03:54.965875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127751 ] 00:33:45.714 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.714 Zero copy mechanism will not be used. 00:33:45.714 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.714 [2024-07-25 20:03:55.026892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.714 [2024-07-25 20:03:55.118460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.972 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:45.972 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:45.972 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:45.972 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:46.230 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:46.230 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.230 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.230 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.230 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.230 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.489 nvme0n1 00:33:46.489 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:46.489 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.489 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.489 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.489 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:46.489 20:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:46.489 I/O size of 131072 is 
greater than zero copy threshold (65536). 00:33:46.489 Zero copy mechanism will not be used. 00:33:46.489 Running I/O for 2 seconds... 00:33:46.489 [2024-07-25 20:03:55.908663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.489 [2024-07-25 20:03:55.908979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.489 [2024-07-25 20:03:55.909020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.489 [2024-07-25 20:03:55.915520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.489 [2024-07-25 20:03:55.916437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.489 [2024-07-25 20:03:55.916502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.922333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.922828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.923027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.929341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.930313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.930343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.936363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.936801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.936923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.943083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.943532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.943746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.950003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.950820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 
[2024-07-25 20:03:55.950949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.956928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.957585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.957619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.963876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.964678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.964741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.970531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.971208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.971237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.977245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.977949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.977982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.984229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.985206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.985236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.990913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.991670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.991805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:55.997885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:55.998537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:55.998570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:56.004683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:56.005257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:56.005287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:56.011572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:56.012166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:56.012585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:56.018299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:56.019028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:56.019111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:56.025217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.749 [2024-07-25 20:03:56.025853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.749 [2024-07-25 20:03:56.025894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.749 [2024-07-25 20:03:56.032181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.032931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.032964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.039279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.039700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.039764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.046353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.047030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.047071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.053089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.053509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.053542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.059831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.060542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.060743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.066676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.067558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.067591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.073682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.074189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.074220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.080417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.080821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.080886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.087213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.087927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.088072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.094087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.094887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.094920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.100440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.101158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.101214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.107288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.108000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.108033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.114159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.114631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.114846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.121151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.121660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.121800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.127894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.128523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.128618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.134657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.135522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.135560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.141490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.142173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.142203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.148718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.149587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.149620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.155273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.155983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.156016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.161860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.750 [2024-07-25 20:03:56.162306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.750 [2024-07-25 20:03:56.162336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.750 [2024-07-25 20:03:56.168638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.751 [2024-07-25 20:03:56.169070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.751 [2024-07-25 20:03:56.169219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.751 [2024-07-25 20:03:56.175050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:46.751 [2024-07-25 20:03:56.175520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.751 [2024-07-25 20:03:56.175732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.181681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.182425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.182468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.188773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.189450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.189661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.195453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.196122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.196152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.202145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.202868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.203067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.208780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.209465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.209498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.216168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.217145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.217174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.223052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.223832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.223960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.229622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.230323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.230353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.236286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 
20:03:56.236946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.237011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.242989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.243513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.243684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.249925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.250660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.250731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.256762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.257314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.257344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.010 [2024-07-25 20:03:56.263354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.010 [2024-07-25 20:03:56.263783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.010 [2024-07-25 20:03:56.264012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.270225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.270827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.270859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.276793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.277577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.277647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.283630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 
00:33:47.011 [2024-07-25 20:03:56.284246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.284494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.290519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.291201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.291278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.297130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.297579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.297611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.303850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.304801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.304834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.310494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.311251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.311305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.317526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.318511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.318544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.324287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.325091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.325253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.331189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.331582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.337795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.338516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.338730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.344476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.345153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.345249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.351220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.352174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.352204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.357905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.358716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.358854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.364593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.365137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.365430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.371594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.371978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.372010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.378274] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.379215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.379251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.384720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.385410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.385471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.391564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.392082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.392205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.011 [2024-07-25 20:03:56.398465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.011 [2024-07-25 20:03:56.399104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.011 [2024-07-25 20:03:56.399151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.012 [2024-07-25 20:03:56.405091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.012 [2024-07-25 20:03:56.405820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.012 [2024-07-25 20:03:56.405853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.012 [2024-07-25 20:03:56.412121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.012 [2024-07-25 20:03:56.412765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.012 [2024-07-25 20:03:56.412859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.012 [2024-07-25 20:03:56.418764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.012 [2024-07-25 20:03:56.419453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.012 [2024-07-25 20:03:56.419494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.012 [2024-07-25 20:03:56.425669] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.012 [2024-07-25 20:03:56.426646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.012 [2024-07-25 20:03:56.426778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.012 [2024-07-25 20:03:56.432605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.012 [2024-07-25 20:03:56.433172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.012 [2024-07-25 20:03:56.433228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.439336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.440219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.440258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.446080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.446879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.446912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.452698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.453182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.453212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.459311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.459773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.459806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.465994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.466526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.466560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.271 
[2024-07-25 20:03:56.472617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.473321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.473514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.479284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.479837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.479870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.485759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.486554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.486587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.492665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.493387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.493449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.499496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.500281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.500310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.506257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.506957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.506995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.513099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.513629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.271 [2024-07-25 20:03:56.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:47.271 [2024-07-25 20:03:56.519960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.271 [2024-07-25 20:03:56.520857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.520889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.526512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.527089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.527238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.533383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.534139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.534285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.539979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.540726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.540842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.546680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.547203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.547237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.553386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.554193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.554223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.560209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.560801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.560964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.566978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.567657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.567777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.573778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.574492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.574805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.580431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.581124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.581184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.587454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.588028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.588096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.594339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.594920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.595278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.600922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.601608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.601641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.607961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.608414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.608581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.614610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.615285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.615359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.621486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.622165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.622195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.628315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.629072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.629102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.634774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.635496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.635584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.641741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.642563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.642673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.648299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.649039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.649080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.272 [2024-07-25 20:03:56.654970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.272 [2024-07-25 20:03:56.655755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.272 [2024-07-25 20:03:56.655784] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.273 [2024-07-25 20:03:56.661678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.273 [2024-07-25 20:03:56.662282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.273 [2024-07-25 20:03:56.662420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.273 [2024-07-25 20:03:56.668195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.273 [2024-07-25 20:03:56.668979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.273 [2024-07-25 20:03:56.669164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.273 [2024-07-25 20:03:56.675024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.273 [2024-07-25 20:03:56.675635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.273 [2024-07-25 20:03:56.675808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.273 [2024-07-25 20:03:56.681403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.273 [2024-07-25 20:03:56.682015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.273 [2024-07-25 20:03:56.682047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.273 [2024-07-25 20:03:56.688224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.273 [2024-07-25 20:03:56.688807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.273 [2024-07-25 20:03:56.688990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.273 [2024-07-25 20:03:56.694701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.273 [2024-07-25 20:03:56.695496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.273 [2024-07-25 20:03:56.695530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.701103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.701873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.701905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.707792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.708274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.708333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.714453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.715092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.715189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.721334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.722284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.722353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.728187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.728934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.728967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.735066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.735597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.735660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.741900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.742491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.742525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.748683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.749214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 
20:03:56.749318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.755636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.756240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.756295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.762456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.762992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.763095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.769252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.770245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.770335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.775881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.776328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.776560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.782678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.783231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.783389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.789381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.789993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.790101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.796129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.796603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:47.531 [2024-07-25 20:03:56.796636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.803207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.803633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.803771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.810137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.810967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.811005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.816759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.817536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.817666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.823784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.824475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.824645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.830569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.531 [2024-07-25 20:03:56.831289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.531 [2024-07-25 20:03:56.831512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.531 [2024-07-25 20:03:56.837234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.837988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.838163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.844200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.844810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.844870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.850963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.851743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.851884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.857948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.858685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.858718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.864701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.865404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.865557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.871496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.872134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.872258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.878208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.878801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.878837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.885034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.885675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.885811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.891502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.891963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.892280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.898136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.898613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.898987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.904953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.905384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.905643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.912130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.912696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.912728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.918946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.919572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.919643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.925789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.926366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.926502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.932546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.933381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.933535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.939376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.939924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.939999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.946094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.946756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.946789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.953175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.532 [2024-07-25 20:03:56.953694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.532 [2024-07-25 20:03:56.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.532 [2024-07-25 20:03:56.959862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.789 [2024-07-25 20:03:56.960699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.789 [2024-07-25 20:03:56.960817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.789 [2024-07-25 20:03:56.966829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.789 [2024-07-25 20:03:56.967198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.789 [2024-07-25 20:03:56.967234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.789 [2024-07-25 20:03:56.973316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.789 [2024-07-25 20:03:56.974037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.789 [2024-07-25 20:03:56.974079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.789 [2024-07-25 20:03:56.980290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:56.981156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:56.981205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:56.987056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:56.987808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:56.987869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:56.993849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:56.994503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:56.994535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.000532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.001336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.001457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.007032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.007663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.007726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.013766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.014194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.014295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.020340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.020905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.021091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.027385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.027999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.028032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.034228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 
20:03:57.034813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.034956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.040860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.041695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.041816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.047659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.048566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.048650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.054201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.055207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.055237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.060941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.061398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.061549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.067758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.068637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.068808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.074498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.075162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.075235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.081048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 
00:33:47.790 [2024-07-25 20:03:57.081411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.081506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.087835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.088304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.088489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.094721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.095399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.095595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.101629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.102292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.102452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.108269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.108953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.108985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.115144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.790 [2024-07-25 20:03:57.115779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.790 [2024-07-25 20:03:57.115974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.790 [2024-07-25 20:03:57.121900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.122536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.122807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.128827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with 
pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.129582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.129642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.136023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.136409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.136472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.142909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.143842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.143914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.149455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.149931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.150045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.156146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.156760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.156858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.163008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.163871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.163976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.169822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.170677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.170710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.176490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.177417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.177451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.183325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.183845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.183887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.189946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.190640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.190831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.196540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.197400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.197434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.203248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.203874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.203907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.210165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.210783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.210937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.791 [2024-07-25 20:03:57.216914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:47.791 [2024-07-25 20:03:57.217346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.791 [2024-07-25 20:03:57.217395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.223751] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.224570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.224604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.230516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.231043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.231289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.237028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.237729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.237763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.243750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.244240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.244270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.250750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.251732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.251765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.257397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.258233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.258263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.264002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.264479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.264513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.270666] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.271545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.271578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.277618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.278017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.278418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.284268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.285049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.285090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.290956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.291681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.291715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.297622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.298543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.050 [2024-07-25 20:03:57.298572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.050 [2024-07-25 20:03:57.304527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.050 [2024-07-25 20:03:57.305438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.305589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.311473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.311974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.312002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.051 
[2024-07-25 20:03:57.317959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.318680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.318803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.324653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.325322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.325442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.331192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.331907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.331970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.337796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.338640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.338673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.344807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.345649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.345772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.351274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.351992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.352025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.358313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.358763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.359093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.365047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.365497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.365671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.371559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.372559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.372592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.378179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.378737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.378801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.385101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.385605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.385670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.392510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.392801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.392834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.399755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.400312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.400438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.407300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.407679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.407712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.414795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.414975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.415198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.422518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.422865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.422973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.429951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.430290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.430324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.437759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.438463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.438496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.051 [2024-07-25 20:03:57.445595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.051 [2024-07-25 20:03:57.445842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.051 [2024-07-25 20:03:57.445874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.052 [2024-07-25 20:03:57.453038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.052 [2024-07-25 20:03:57.453675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.052 [2024-07-25 20:03:57.453805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.052 [2024-07-25 20:03:57.460721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.052 [2024-07-25 20:03:57.461226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.052 [2024-07-25 20:03:57.461256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.052 [2024-07-25 20:03:57.468288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.052 [2024-07-25 20:03:57.468734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.052 [2024-07-25 20:03:57.468887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.052 [2024-07-25 20:03:57.475962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.052 [2024-07-25 20:03:57.476450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.052 [2024-07-25 20:03:57.476483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.310 [2024-07-25 20:03:57.482784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.310 [2024-07-25 20:03:57.483595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.310 [2024-07-25 20:03:57.483739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.310 [2024-07-25 20:03:57.490305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.310 [2024-07-25 20:03:57.490755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.310 [2024-07-25 20:03:57.490787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.310 [2024-07-25 20:03:57.497938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.310 [2024-07-25 20:03:57.498281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.310 [2024-07-25 20:03:57.498309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.310 [2024-07-25 20:03:57.505249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.310 [2024-07-25 20:03:57.505560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.310 [2024-07-25 20:03:57.505601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.310 [2024-07-25 20:03:57.512621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.310 [2024-07-25 20:03:57.513112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.310 [2024-07-25 20:03:57.513143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.310 [2024-07-25 20:03:57.519969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.520330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.520360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.526944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.527488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.527521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.534294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.534788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.534873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.541885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.542294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.542386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.549152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.549443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.549476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.556680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.556984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.557020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.564065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.564747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 
20:03:57.564871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.571391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.571754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.572002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.579043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.579394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.579499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.586534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.586774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.586998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.594279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.594603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.594737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.601740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.601992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.602025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.609598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.610207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.610236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.617493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.618003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:48.311 [2024-07-25 20:03:57.618205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.624943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.625505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.625597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.632439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.632889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.632981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.639930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.640380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.640606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.647580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.647875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.647904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.654772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.655364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.655393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.661491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.662279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.311 [2024-07-25 20:03:57.662334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.311 [2024-07-25 20:03:57.668321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.311 [2024-07-25 20:03:57.668967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.669036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.675127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.675960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.676026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.681770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.682308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.682382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.688632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.689363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.689412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.695798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.696254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.696479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.702773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.703312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.703417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.709791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.710246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.710302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.716723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.717218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.717248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.723508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.724299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.724329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.730145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.730825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.730868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.312 [2024-07-25 20:03:57.736963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.312 [2024-07-25 20:03:57.737498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.312 [2024-07-25 20:03:57.737566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.743819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.744540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.744573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.750834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.751752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.751785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.757857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.758430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.758615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.764511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.764998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.765067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.771600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.772184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.772261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.778453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.779070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.779103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.785279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.785846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.785988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.792097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.792775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.792808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.799140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.799620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.799652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.806165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.807083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.807180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.813075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.813911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.571 [2024-07-25 20:03:57.813943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.571 [2024-07-25 20:03:57.819707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.571 [2024-07-25 20:03:57.820408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.820493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.826575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.827213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.827313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.833677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.834247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.834276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.840802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.841491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.841555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.847497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.848284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.848342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.854538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.855029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.855199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.861423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.861979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.862012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.868531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.869163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.869217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.875632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.876211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.876283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.882370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.882913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.882974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.889375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.890012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.890124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.896200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.896708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.896919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.572 [2024-07-25 20:03:57.902870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdabe90) with pdu=0x2000190fef90 00:33:48.572 [2024-07-25 20:03:57.903454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.572 [2024-07-25 20:03:57.903572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.572 00:33:48.572 Latency(us) 00:33:48.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.572 Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 16, IO size: 131072) 00:33:48.572 nvme0n1 : 2.00 4506.86 563.36 0.00 0.00 3531.30 2560.76 8835.22 00:33:48.572 =================================================================================================================== 00:33:48.572 Total : 4506.86 563.36 0.00 0.00 3531.30 2560.76 8835.22 00:33:48.572 0 00:33:48.572 20:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:48.572 20:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:48.572 20:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:48.572 20:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:48.572 | .driver_specific 00:33:48.572 | .nvme_error 00:33:48.572 | .status_code 00:33:48.572 | .command_transient_transport_error' 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 291 > 0 )) 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4127751 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4127751 ']' 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4127751 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4127751 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4127751' 00:33:48.830 killing process with pid 4127751 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4127751 00:33:48.830 Received shutdown signal, test time was about 2.000000 seconds 00:33:48.830 00:33:48.830 Latency(us) 00:33:48.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.830 =================================================================================================================== 00:33:48.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:48.830 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4127751 00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4126393 00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4126393 ']' 00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4126393 00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4126393 
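The wrapped per-entry trace above is hard to follow; condensed, the transient-error check that closes the digest-error test amounts to the following. This is an illustrative sketch rather than the digest.sh source: the SPDK shorthand variable is introduced here for readability, while the bperf RPC socket, the bdev name, the jq path and the observed count of 291 are taken directly from the trace.

# Ask bdevperf (via its RPC socket) for per-bdev I/O statistics and extract the
# NVMe "command transient transport error" counter that data-digest corruption
# is expected to bump.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
count=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test passes when at least one transient transport error was recorded;
# the trace above shows 291 of them for this run.
(( count > 0 ))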
00:33:49.088 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:49.089 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:49.089 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4126393' 00:33:49.089 killing process with pid 4126393 00:33:49.089 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4126393 00:33:49.089 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4126393 00:33:49.348 00:33:49.348 real 0m14.774s 00:33:49.348 user 0m27.878s 00:33:49.348 sys 0m4.241s 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.348 ************************************ 00:33:49.348 END TEST nvmf_digest_error 00:33:49.348 ************************************ 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:49.348 rmmod nvme_tcp 00:33:49.348 rmmod nvme_fabrics 00:33:49.348 rmmod nvme_keyring 00:33:49.348 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 4126393 ']' 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 4126393 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 4126393 ']' 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 4126393 00:33:49.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4126393) - No such process 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 4126393 is not found' 00:33:49.608 Process with pid 4126393 is not found 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.608 20:03:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:49.608 20:03:58 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.514 20:04:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:51.514 00:33:51.514 real 0m34.292s 00:33:51.514 user 0m58.312s 00:33:51.514 sys 0m10.146s 00:33:51.514 20:04:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:51.514 20:04:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.514 ************************************ 00:33:51.514 END TEST nvmf_digest 00:33:51.514 ************************************ 00:33:51.514 20:04:00 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:51.514 20:04:00 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:51.514 20:04:00 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:51.514 20:04:00 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:51.514 20:04:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:51.514 20:04:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:51.514 20:04:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.514 ************************************ 00:33:51.514 START TEST nvmf_bdevperf 00:33:51.514 ************************************ 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:51.515 * Looking for test storage... 00:33:51.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.515 
20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.515 20:04:00 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:51.515 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:51.774 20:04:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.692 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:53.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:53.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 
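The device-discovery trace above builds ID tables for the supported NICs (E810/X722/Mellanox), finds the two E810 ports at 0000:0a:00.0 and 0000:0a:00.1, and then resolves each PCI address to its kernel interface through sysfs. A simplified sketch of that resolution step, assuming the link-state check is against the interface's operstate (the exact test inside nvmf/common.sh is not visible in this trace beyond the '[[ up == up ]]' comparison):

# Map each supported PCI function to the net interface sysfs exposes for it,
# keeping only interfaces whose link is up.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        dev=${netdir##*/}                              # e.g. cvl_0_0, cvl_0_1
        [[ $(cat "$netdir/operstate") == up ]] || continue
        echo "Found net devices under $pci: $dev"
        net_devs+=("$dev")
    done
done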
00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:53.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:53.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:53.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:33:53.693 00:33:53.693 --- 10.0.0.2 ping statistics --- 00:33:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.693 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:33:53.693 00:33:53.693 --- 10.0.0.1 ping statistics --- 00:33:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.693 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4130097 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4130097 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 4130097 ']' 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
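Condensed from the trace above, the bdevperf test plumbing is: move one E810 port into a private network namespace to act as the NVMe-oF target, keep its sibling port in the root namespace as the initiator, verify connectivity both ways, load nvme-tcp, and start nvmf_tgt inside that namespace. A sketch of the traced commands (backgrounding and the wait-for-RPC step are handled by the harness's nvmfappstart/waitforlisten helpers, which are not reproduced here):

# Target side lives in its own netns; the initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator
modprobe nvme-tcp
# Launch the SPDK NVMe-oF target inside the target namespace (-m 0xE = 3 cores).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &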
00:33:53.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.693 20:04:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.694 [2024-07-25 20:04:02.973097] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:53.694 [2024-07-25 20:04:02.973181] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.694 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.694 [2024-07-25 20:04:03.043612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:53.981 [2024-07-25 20:04:03.134236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.981 [2024-07-25 20:04:03.134286] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.981 [2024-07-25 20:04:03.134301] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.981 [2024-07-25 20:04:03.134318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.981 [2024-07-25 20:04:03.134330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.981 [2024-07-25 20:04:03.137096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.981 [2024-07-25 20:04:03.137177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.981 [2024-07-25 20:04:03.137180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.981 [2024-07-25 20:04:03.286601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.981 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.981 Malloc0 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.982 
20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.982 [2024-07-25 20:04:03.349016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:53.982 { 00:33:53.982 "params": { 00:33:53.982 "name": "Nvme$subsystem", 00:33:53.982 "trtype": "$TEST_TRANSPORT", 00:33:53.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.982 "adrfam": "ipv4", 00:33:53.982 "trsvcid": "$NVMF_PORT", 00:33:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.982 "hdgst": ${hdgst:-false}, 00:33:53.982 "ddgst": ${ddgst:-false} 00:33:53.982 }, 00:33:53.982 "method": "bdev_nvme_attach_controller" 00:33:53.982 } 00:33:53.982 EOF 00:33:53.982 )") 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:53.982 20:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:53.982 "params": { 00:33:53.982 "name": "Nvme1", 00:33:53.982 "trtype": "tcp", 00:33:53.982 "traddr": "10.0.0.2", 00:33:53.982 "adrfam": "ipv4", 00:33:53.982 "trsvcid": "4420", 00:33:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.982 "hdgst": false, 00:33:53.982 "ddgst": false 00:33:53.982 }, 00:33:53.982 "method": "bdev_nvme_attach_controller" 00:33:53.982 }' 00:33:53.982 [2024-07-25 20:04:03.397410] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
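Before bdevperf comes up, the rpc_cmd calls traced above have provisioned the target: a TCP transport, a 64 MiB RAM-backed Malloc bdev, and a subsystem that exposes it on 10.0.0.2:4420. Replayed by hand against the target's RPC socket, the sequence would look roughly like this; the RPC shorthand is introduced here, and the trace actually goes through the in-tree rpc_cmd wrapper rather than calling rpc.py directly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as passed by the test
$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # back the namespace with Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420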
00:33:53.982 [2024-07-25 20:04:03.397474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130125 ] 00:33:54.240 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.240 [2024-07-25 20:04:03.457487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.240 [2024-07-25 20:04:03.547443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.497 Running I/O for 1 seconds... 00:33:55.431 00:33:55.431 Latency(us) 00:33:55.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.431 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:55.431 Verification LBA range: start 0x0 length 0x4000 00:33:55.431 Nvme1n1 : 1.01 9122.75 35.64 0.00 0.00 13970.82 3252.53 15243.19 00:33:55.431 =================================================================================================================== 00:33:55.431 Total : 9122.75 35.64 0.00 0.00 13970.82 3252.53 15243.19 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4130325 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:55.689 { 00:33:55.689 "params": { 00:33:55.689 "name": "Nvme$subsystem", 00:33:55.689 "trtype": "$TEST_TRANSPORT", 00:33:55.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.689 "adrfam": "ipv4", 00:33:55.689 "trsvcid": "$NVMF_PORT", 00:33:55.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.689 "hdgst": ${hdgst:-false}, 00:33:55.689 "ddgst": ${ddgst:-false} 00:33:55.689 }, 00:33:55.689 "method": "bdev_nvme_attach_controller" 00:33:55.689 } 00:33:55.689 EOF 00:33:55.689 )") 00:33:55.689 20:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:55.689 20:04:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:55.689 20:04:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:55.689 20:04:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:55.689 "params": { 00:33:55.689 "name": "Nvme1", 00:33:55.689 "trtype": "tcp", 00:33:55.689 "traddr": "10.0.0.2", 00:33:55.689 "adrfam": "ipv4", 00:33:55.689 "trsvcid": "4420", 00:33:55.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.689 "hdgst": false, 00:33:55.689 "ddgst": false 00:33:55.689 }, 00:33:55.690 "method": "bdev_nvme_attach_controller" 00:33:55.690 }' 00:33:55.690 [2024-07-25 20:04:05.042468] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
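Both bdevperf passes are driven purely by a generated JSON config that attaches the exported subsystem as controller "Nvme1" (bdev Nvme1n1). Run by hand, the equivalent is roughly the following; the attach parameters are exactly those printed by gen_nvmf_target_json above, while the outer "subsystems"/"config" wrapper and the temporary file name are reconstructed here (the test feeds the config through /dev/fd/6x instead of a file):

cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# First pass: 1 s verify run at queue depth 128 with 4 KiB I/O; the second pass
# runs for 15 s with -f, during which the test deliberately kill -9's the target
# (pid 4130097), producing the aborted-command flood that follows below.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f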
00:33:55.690 [2024-07-25 20:04:05.042546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130325 ] 00:33:55.690 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.690 [2024-07-25 20:04:05.105932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.948 [2024-07-25 20:04:05.194307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.206 Running I/O for 15 seconds... 00:33:58.740 20:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4130097 00:33:58.740 20:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:58.740 [2024-07-25 20:04:08.009716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.009769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.009803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.009822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.009843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.009861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.009881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.009899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.009918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.009936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.009954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.009971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.009989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:58.740 [2024-07-25 20:04:08.010068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.740 [2024-07-25 20:04:08.010546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.740 [2024-07-25 20:04:08.010563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010800] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.010975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.010992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43040 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 
[2024-07-25 20:04:08.011504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.741 [2024-07-25 20:04:08.011573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.741 [2024-07-25 20:04:08.011591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.011969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.011986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.742 [2024-07-25 20:04:08.012002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.742 [2024-07-25 20:04:08.012035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.742 [2024-07-25 20:04:08.012567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.742 [2024-07-25 20:04:08.012584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.012831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.012863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 
20:04:08.012880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.012897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.012929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.012961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.012978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.012993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.743 [2024-07-25 20:04:08.013390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.743 [2024-07-25 20:04:08.013711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.743 [2024-07-25 20:04:08.013727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43544 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.013983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.013999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.014015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.014047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.014092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.014142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.744 [2024-07-25 20:04:08.014172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264a9a0 is same with the state(5) to be set 00:33:58.744 [2024-07-25 20:04:08.014209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.744 [2024-07-25 20:04:08.014220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.744 [2024-07-25 20:04:08.014232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43608 len:8 PRP1 0x0 PRP2 0x0 00:33:58.744 [2024-07-25 20:04:08.014246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014311] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x264a9a0 was disconnected and freed. 
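The dump above is SPDK draining qpair 0x264a9a0: every outstanding READ/WRITE on sqid:1 is force-completed with the generic status ABORTED - SQ DELETION (00/08) before the qpair is disconnected and freed. A quick way to sanity-check a run like this is to count how many commands were flushed that way; the sketch below is a hypothetical helper (the log path, regexes, and function name are assumptions, not part of the test) that tallies the printed commands and aborted completions from a saved copy of this console output.

import re
from collections import Counter

# Hypothetical helper (log path, regexes, and names are assumptions):
# tally the commands and aborted completions printed in the dump above
# from a saved copy of this console output.
CMD_RE = re.compile(r'\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+ len:\d+')
ABORT_RE = re.compile(r'\*NOTICE\*: ABORTED - SQ DELETION \(00/08\)')

def summarize(path='nvmf_timeout.log'):
    opcodes = Counter()
    aborted = 0
    with open(path) as fh:
        for line in fh:
            opcodes.update(m.group(1) for m in CMD_RE.finditer(line))
            aborted += len(ABORT_RE.findall(line))
    return opcodes, aborted

if __name__ == '__main__':
    opcodes, aborted = summarize()
    print(f'commands printed: {dict(opcodes)}; '
          f'completions aborted by SQ deletion: {aborted}')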
reset controller. 00:33:58.744 [2024-07-25 20:04:08.014400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.744 [2024-07-25 20:04:08.014424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.744 [2024-07-25 20:04:08.014455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.744 [2024-07-25 20:04:08.014486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.744 [2024-07-25 20:04:08.014516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.744 [2024-07-25 20:04:08.014530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.744 [2024-07-25 20:04:08.018372] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.744 [2024-07-25 20:04:08.018424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.744 [2024-07-25 20:04:08.019131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.744 [2024-07-25 20:04:08.019160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.744 [2024-07-25 20:04:08.019176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.744 [2024-07-25 20:04:08.019421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.744 [2024-07-25 20:04:08.019665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.744 [2024-07-25 20:04:08.019689] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.744 [2024-07-25 20:04:08.019708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.744 [2024-07-25 20:04:08.023317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
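From here on the log repeats one pattern: nvme_ctrlr_disconnect starts a controller reset, posix_sock_create's connect() to 10.0.0.2:4420 fails with errno = 111, the qpair flush then reports a bad file descriptor, and bdev_nvme gives up with "Resetting controller failed." On Linux, errno 111 is ECONNREFUSED, meaning nothing is listening on the target port while the reset is in flight. The snippet below is only a minimal reproduction of that errno against a local port with no listener; 127.0.0.1:4420 is an assumption standing in for the target address, and nothing like it is run by the test itself.

import errno
import socket

# Minimal reproduction of the errno in the messages above. On Linux,
# errno 111 is ECONNREFUSED; connecting to a port nobody listens on
# fails the same way posix_sock_create's connect() did. 127.0.0.1:4420
# is an assumption standing in for the target's 10.0.0.2:4420.
print('ECONNREFUSED on this host =', errno.ECONNREFUSED)  # 111 on Linux

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1.0)
try:
    sock.connect(('127.0.0.1', 4420))
    print('unexpectedly connected; something is listening on 4420')
except OSError as exc:
    print('connect() failed, errno =', exc.errno,
          errno.errorcode.get(exc.errno, 'unknown'))
finally:
    sock.close()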
00:33:58.744 [2024-07-25 20:04:08.032656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.744 [2024-07-25 20:04:08.033075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.744 [2024-07-25 20:04:08.033108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.744 [2024-07-25 20:04:08.033126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.744 [2024-07-25 20:04:08.033365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.744 [2024-07-25 20:04:08.033608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.744 [2024-07-25 20:04:08.033632] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.744 [2024-07-25 20:04:08.033648] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.744 [2024-07-25 20:04:08.037236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.744 [2024-07-25 20:04:08.046538] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.744 [2024-07-25 20:04:08.046959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.744 [2024-07-25 20:04:08.047003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.744 [2024-07-25 20:04:08.047020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.744 [2024-07-25 20:04:08.047295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.744 [2024-07-25 20:04:08.047546] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.047571] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.047588] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.051172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.745 [2024-07-25 20:04:08.060323] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.060724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.060755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.060774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.061043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.061279] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.061300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.061314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.064439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.745 [2024-07-25 20:04:08.073652] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.073995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.074022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.074066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.074284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.074521] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.074541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.074553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.077623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.745 [2024-07-25 20:04:08.086971] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.087341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.087369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.087400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.087621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.087835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.087855] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.087868] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.090886] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.745 [2024-07-25 20:04:08.100255] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.100704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.100731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.100746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.100982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.101237] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.101260] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.101274] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.104349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.745 [2024-07-25 20:04:08.113578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.114003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.114028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.114066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.114297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.114534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.114561] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.114574] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.117556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.745 [2024-07-25 20:04:08.126847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.127216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.127258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.127274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.127529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.127728] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.127747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.127761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.745 [2024-07-25 20:04:08.130788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
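The reset attempts above land at a fairly steady cadence, roughly 13 ms apart judging by the nvme_ctrlr_disconnect timestamps. If you want that number rather than eyeballing it, a throwaway parser like the one below works on a saved copy of this output; the file name and regex are assumptions.

import re
from datetime import datetime

# Hypothetical helper (log path and regex are assumptions): pull the
# timestamps of the "resetting controller" notices out of a saved copy
# of this console output and print how far apart the attempts are.
RESET_RE = re.compile(
    r'\[([0-9-]+ [0-9:.]+)\] nvme_ctrlr\.c:\s*\d+:nvme_ctrlr_disconnect: '
    r'\*NOTICE\*: \[[^\]]+\] resetting controller')

def reset_intervals(path='nvmf_timeout.log'):
    stamps = []
    with open(path) as fh:
        for line in fh:
            for match in RESET_RE.finditer(line):
                stamps.append(datetime.strptime(match.group(1),
                                                '%Y-%m-%d %H:%M:%S.%f'))
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

if __name__ == '__main__':
    deltas = reset_intervals()
    if deltas:
        print(f'{len(deltas) + 1} reset attempts, '
              f'mean spacing {1000 * sum(deltas) / len(deltas):.1f} ms')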
00:33:58.745 [2024-07-25 20:04:08.140203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.745 [2024-07-25 20:04:08.140627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.745 [2024-07-25 20:04:08.140653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.745 [2024-07-25 20:04:08.140669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.745 [2024-07-25 20:04:08.140938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.745 [2024-07-25 20:04:08.141171] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.745 [2024-07-25 20:04:08.141193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.745 [2024-07-25 20:04:08.141206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.746 [2024-07-25 20:04:08.144191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.746 [2024-07-25 20:04:08.153502] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.746 [2024-07-25 20:04:08.153952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.746 [2024-07-25 20:04:08.153979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:58.746 [2024-07-25 20:04:08.153995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:58.746 [2024-07-25 20:04:08.154233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:58.746 [2024-07-25 20:04:08.154473] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.746 [2024-07-25 20:04:08.154493] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.746 [2024-07-25 20:04:08.154505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.746 [2024-07-25 20:04:08.157485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.006 [2024-07-25 20:04:08.167301] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.006 [2024-07-25 20:04:08.167761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-25 20:04:08.167789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.006 [2024-07-25 20:04:08.167805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.006 [2024-07-25 20:04:08.168019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.006 [2024-07-25 20:04:08.168278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.006 [2024-07-25 20:04:08.168301] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.006 [2024-07-25 20:04:08.168315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.006 [2024-07-25 20:04:08.171335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.006 [2024-07-25 20:04:08.180843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.006 [2024-07-25 20:04:08.181274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-25 20:04:08.181302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.006 [2024-07-25 20:04:08.181318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.006 [2024-07-25 20:04:08.181570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.006 [2024-07-25 20:04:08.181769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.006 [2024-07-25 20:04:08.181789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.006 [2024-07-25 20:04:08.181801] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.006 [2024-07-25 20:04:08.184828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.006 [2024-07-25 20:04:08.194103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.194475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.194502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.194518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.194748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.194963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.194982] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.194995] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.198019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.007 [2024-07-25 20:04:08.207334] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.207797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.207825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.207841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.208097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.208325] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.208371] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.208385] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.211381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.007 [2024-07-25 20:04:08.220529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.220909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.220935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.220950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.221188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.221421] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.221441] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.221455] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.224437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.007 [2024-07-25 20:04:08.233764] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.234116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.234145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.234162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.234377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.234591] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.234611] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.234624] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.237659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.007 [2024-07-25 20:04:08.247005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.247471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.247498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.247530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.247772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.247986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.248005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.248023] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.251066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.007 [2024-07-25 20:04:08.260222] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.260673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.260701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.260717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.260947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.261190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.261213] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.261227] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.264668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.007 [2024-07-25 20:04:08.274095] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.274523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.274550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.007 [2024-07-25 20:04:08.274566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.007 [2024-07-25 20:04:08.274781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.007 [2024-07-25 20:04:08.274999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.007 [2024-07-25 20:04:08.275021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.007 [2024-07-25 20:04:08.275035] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.007 [2024-07-25 20:04:08.278383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.007 [2024-07-25 20:04:08.287545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.007 [2024-07-25 20:04:08.287936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-25 20:04:08.287964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.287980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.288217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.288440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.288460] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.288473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.291576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.008 [2024-07-25 20:04:08.300842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.301274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.301301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.301316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.301568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.301767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.301786] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.301799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.304790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.008 [2024-07-25 20:04:08.314051] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.314471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.314512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.314528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.314796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.314996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.315016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.315028] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.318037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.008 [2024-07-25 20:04:08.327322] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.327695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.327723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.327739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.327981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.328227] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.328249] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.328263] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.331272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.008 [2024-07-25 20:04:08.340666] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.341117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.341145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.341161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.341396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.341612] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.341632] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.341645] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.344625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.008 [2024-07-25 20:04:08.354009] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.354469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.354496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.354511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.354732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.354946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.354966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.354979] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.357959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.008 [2024-07-25 20:04:08.367393] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.367781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.367809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.367825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.368078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.368284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.008 [2024-07-25 20:04:08.368304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.008 [2024-07-25 20:04:08.368318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.008 [2024-07-25 20:04:08.371350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.008 [2024-07-25 20:04:08.380654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.008 [2024-07-25 20:04:08.381040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-25 20:04:08.381075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.008 [2024-07-25 20:04:08.381092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.008 [2024-07-25 20:04:08.381322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.008 [2024-07-25 20:04:08.381554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.009 [2024-07-25 20:04:08.381573] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.009 [2024-07-25 20:04:08.381590] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.009 [2024-07-25 20:04:08.384636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.009 [2024-07-25 20:04:08.393950] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.009 [2024-07-25 20:04:08.394301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.009 [2024-07-25 20:04:08.394328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.009 [2024-07-25 20:04:08.394344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.009 [2024-07-25 20:04:08.394564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.009 [2024-07-25 20:04:08.394780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.009 [2024-07-25 20:04:08.394800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.009 [2024-07-25 20:04:08.394813] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.009 [2024-07-25 20:04:08.397847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.009 [2024-07-25 20:04:08.407156] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.009 [2024-07-25 20:04:08.407630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.009 [2024-07-25 20:04:08.407657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.009 [2024-07-25 20:04:08.407673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.009 [2024-07-25 20:04:08.407915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.009 [2024-07-25 20:04:08.408175] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.009 [2024-07-25 20:04:08.408197] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.009 [2024-07-25 20:04:08.408211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.009 [2024-07-25 20:04:08.411213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.009 [2024-07-25 20:04:08.420425] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.009 [2024-07-25 20:04:08.420845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.009 [2024-07-25 20:04:08.420872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.009 [2024-07-25 20:04:08.420888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.009 [2024-07-25 20:04:08.421140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.009 [2024-07-25 20:04:08.421346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.009 [2024-07-25 20:04:08.421367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.009 [2024-07-25 20:04:08.421380] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.009 [2024-07-25 20:04:08.424391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.009 [2024-07-25 20:04:08.434082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.009 [2024-07-25 20:04:08.434492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.009 [2024-07-25 20:04:08.434526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.009 [2024-07-25 20:04:08.434542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.009 [2024-07-25 20:04:08.434758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.009 [2024-07-25 20:04:08.434976] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.009 [2024-07-25 20:04:08.434998] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.269 [2024-07-25 20:04:08.435012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.269 [2024-07-25 20:04:08.438209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.269 [2024-07-25 20:04:08.447473] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.269 [2024-07-25 20:04:08.447821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.269 [2024-07-25 20:04:08.447848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.269 [2024-07-25 20:04:08.447863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.269 [2024-07-25 20:04:08.448107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.448324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.448346] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.448359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.451378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.270 [2024-07-25 20:04:08.460754] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.461159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.461189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.461205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.461435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.461650] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.461669] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.461682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.464692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.270 [2024-07-25 20:04:08.474067] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.474418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.474446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.474462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.474694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.474914] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.474934] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.474947] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.477892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.270 [2024-07-25 20:04:08.487397] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.487715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.487756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.487772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.487994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.488228] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.488250] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.488264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.491252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.270 [2024-07-25 20:04:08.500637] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.500984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.501010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.501024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.501268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.501486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.501506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.501519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.504526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.270 [2024-07-25 20:04:08.513997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.514416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.514444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.514460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.514686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.514921] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.514942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.514955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.518425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.270 [2024-07-25 20:04:08.527448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.527834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.527861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.527877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.528129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.528364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.270 [2024-07-25 20:04:08.528385] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.270 [2024-07-25 20:04:08.528414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.270 [2024-07-25 20:04:08.531490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.270 [2024-07-25 20:04:08.540816] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.270 [2024-07-25 20:04:08.541189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.270 [2024-07-25 20:04:08.541216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.270 [2024-07-25 20:04:08.541232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.270 [2024-07-25 20:04:08.541461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.270 [2024-07-25 20:04:08.541677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.541696] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.541709] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.544706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.271 [2024-07-25 20:04:08.554190] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.554659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.554687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.554703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.554948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.555214] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.555237] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.555251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.558254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.271 [2024-07-25 20:04:08.567397] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.567843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.567870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.567891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.568146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.568388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.568409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.568422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.571423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.271 [2024-07-25 20:04:08.580733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.581121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.581149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.581165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.581409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.581632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.581652] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.581665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.584673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.271 [2024-07-25 20:04:08.593942] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.594334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.594362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.594379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.594621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.594820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.594840] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.594852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.597874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.271 [2024-07-25 20:04:08.607179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.607583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.607609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.607625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.607858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.608083] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.608123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.608137] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.611142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.271 [2024-07-25 20:04:08.620440] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.620852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.620894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.620910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.621149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.621361] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.621382] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.621411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.624390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.271 [2024-07-25 20:04:08.633700] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.271 [2024-07-25 20:04:08.634087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.271 [2024-07-25 20:04:08.634114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.271 [2024-07-25 20:04:08.634130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.271 [2024-07-25 20:04:08.634359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.271 [2024-07-25 20:04:08.634575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.271 [2024-07-25 20:04:08.634595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.271 [2024-07-25 20:04:08.634608] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.271 [2024-07-25 20:04:08.637631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.271 [2024-07-25 20:04:08.646903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.272 [2024-07-25 20:04:08.647332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.272 [2024-07-25 20:04:08.647359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.272 [2024-07-25 20:04:08.647374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.272 [2024-07-25 20:04:08.647627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.272 [2024-07-25 20:04:08.647825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.272 [2024-07-25 20:04:08.647845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.272 [2024-07-25 20:04:08.647857] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.272 [2024-07-25 20:04:08.650849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.272 [2024-07-25 20:04:08.660206] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.272 [2024-07-25 20:04:08.660633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.272 [2024-07-25 20:04:08.660676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.272 [2024-07-25 20:04:08.660692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.272 [2024-07-25 20:04:08.660936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.272 [2024-07-25 20:04:08.661181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.272 [2024-07-25 20:04:08.661203] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.272 [2024-07-25 20:04:08.661216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.272 [2024-07-25 20:04:08.664203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.272 [2024-07-25 20:04:08.673524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.272 [2024-07-25 20:04:08.673882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.272 [2024-07-25 20:04:08.673910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.272 [2024-07-25 20:04:08.673925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.272 [2024-07-25 20:04:08.674164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.272 [2024-07-25 20:04:08.674397] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.272 [2024-07-25 20:04:08.674418] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.272 [2024-07-25 20:04:08.674446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.272 [2024-07-25 20:04:08.677430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.272 [2024-07-25 20:04:08.686776] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.272 [2024-07-25 20:04:08.687217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.272 [2024-07-25 20:04:08.687245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.272 [2024-07-25 20:04:08.687261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.272 [2024-07-25 20:04:08.687507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.272 [2024-07-25 20:04:08.687706] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.272 [2024-07-25 20:04:08.687726] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.272 [2024-07-25 20:04:08.687738] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.272 [2024-07-25 20:04:08.690727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.532 [2024-07-25 20:04:08.700283] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.532 [2024-07-25 20:04:08.700739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.700780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.700796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.701042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.701271] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.701292] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.701305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.704494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.533 [2024-07-25 20:04:08.713629] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.714032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.714081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.714098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.714328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.714543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.714563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.714576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.717580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.533 [2024-07-25 20:04:08.726914] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.727319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.727347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.727363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.727592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.727814] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.727835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.727848] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.730920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.533 [2024-07-25 20:04:08.740324] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.740710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.740738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.740754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.740983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.741235] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.741257] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.741276] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.744366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.533 [2024-07-25 20:04:08.753701] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.754140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.754182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.754199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.754443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.754657] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.754677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.754690] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.757709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.533 [2024-07-25 20:04:08.767001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.767402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.767430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.767446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.767675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.767908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.767930] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.767945] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.771377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.533 [2024-07-25 20:04:08.780416] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.780863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.780891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.780907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.781130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.781365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.781387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.781415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.784470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.533 [2024-07-25 20:04:08.793648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.794043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.533 [2024-07-25 20:04:08.794077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.533 [2024-07-25 20:04:08.794095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.533 [2024-07-25 20:04:08.794310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.533 [2024-07-25 20:04:08.794546] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.533 [2024-07-25 20:04:08.794566] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.533 [2024-07-25 20:04:08.794579] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.533 [2024-07-25 20:04:08.797601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.533 [2024-07-25 20:04:08.806969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.533 [2024-07-25 20:04:08.807369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.807397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.807413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.807641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.807863] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.807884] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.807896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.810974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.534 [2024-07-25 20:04:08.820415] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.820793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.820820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.820836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.821088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.821317] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.821338] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.821352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.824436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.534 [2024-07-25 20:04:08.833755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.834142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.834170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.834185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.834431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.834646] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.834666] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.834679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.837699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.534 [2024-07-25 20:04:08.847019] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.847496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.847523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.847555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.847810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.848009] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.848029] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.848042] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.851033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.534 [2024-07-25 20:04:08.860361] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.860825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.860853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.860869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.861123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.861351] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.861387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.861400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.864396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.534 [2024-07-25 20:04:08.873694] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.874055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.874088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.874104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.874346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.874561] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.874581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.874594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.877648] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.534 [2024-07-25 20:04:08.886938] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.887409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.887438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.887455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.887698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.534 [2024-07-25 20:04:08.887898] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.534 [2024-07-25 20:04:08.887917] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.534 [2024-07-25 20:04:08.887930] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.534 [2024-07-25 20:04:08.890936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.534 [2024-07-25 20:04:08.900229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.534 [2024-07-25 20:04:08.900638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.534 [2024-07-25 20:04:08.900666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.534 [2024-07-25 20:04:08.900682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.534 [2024-07-25 20:04:08.900923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.535 [2024-07-25 20:04:08.901184] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.535 [2024-07-25 20:04:08.901206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.535 [2024-07-25 20:04:08.901220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.535 [2024-07-25 20:04:08.904219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.535 [2024-07-25 20:04:08.913509] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.535 [2024-07-25 20:04:08.913894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.535 [2024-07-25 20:04:08.913922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.535 [2024-07-25 20:04:08.913937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.535 [2024-07-25 20:04:08.914174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.535 [2024-07-25 20:04:08.914416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.535 [2024-07-25 20:04:08.914435] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.535 [2024-07-25 20:04:08.914449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.535 [2024-07-25 20:04:08.917429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.535 [2024-07-25 20:04:08.926710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.535 [2024-07-25 20:04:08.927098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.535 [2024-07-25 20:04:08.927130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.535 [2024-07-25 20:04:08.927146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.535 [2024-07-25 20:04:08.927375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.535 [2024-07-25 20:04:08.927590] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.535 [2024-07-25 20:04:08.927610] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.535 [2024-07-25 20:04:08.927623] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.535 [2024-07-25 20:04:08.930644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.535 [2024-07-25 20:04:08.939932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.535 [2024-07-25 20:04:08.940386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.535 [2024-07-25 20:04:08.940413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.535 [2024-07-25 20:04:08.940444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.535 [2024-07-25 20:04:08.940688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.535 [2024-07-25 20:04:08.940887] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.535 [2024-07-25 20:04:08.940907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.535 [2024-07-25 20:04:08.940920] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.535 [2024-07-25 20:04:08.943946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.535 [2024-07-25 20:04:08.953273] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.535 [2024-07-25 20:04:08.953629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.535 [2024-07-25 20:04:08.953656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.535 [2024-07-25 20:04:08.953671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.535 [2024-07-25 20:04:08.953878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.535 [2024-07-25 20:04:08.954157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.535 [2024-07-25 20:04:08.954178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.535 [2024-07-25 20:04:08.954192] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.535 [2024-07-25 20:04:08.957333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.795 [2024-07-25 20:04:08.966639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.795 [2024-07-25 20:04:08.967022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.795 [2024-07-25 20:04:08.967049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.795 [2024-07-25 20:04:08.967073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.795 [2024-07-25 20:04:08.967289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.795 [2024-07-25 20:04:08.967529] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.795 [2024-07-25 20:04:08.967551] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.795 [2024-07-25 20:04:08.967565] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.795 [2024-07-25 20:04:08.970673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.795 [2024-07-25 20:04:08.979808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.795 [2024-07-25 20:04:08.980154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.795 [2024-07-25 20:04:08.980181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.795 [2024-07-25 20:04:08.980196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.795 [2024-07-25 20:04:08.980419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.795 [2024-07-25 20:04:08.980633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.795 [2024-07-25 20:04:08.980653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.795 [2024-07-25 20:04:08.980666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.795 [2024-07-25 20:04:08.983678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.795 [2024-07-25 20:04:08.993161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.795 [2024-07-25 20:04:08.993576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.795 [2024-07-25 20:04:08.993603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.795 [2024-07-25 20:04:08.993619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.795 [2024-07-25 20:04:08.993865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.795 [2024-07-25 20:04:08.994125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.795 [2024-07-25 20:04:08.994147] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.795 [2024-07-25 20:04:08.994161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.795 [2024-07-25 20:04:08.997168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.795 [2024-07-25 20:04:09.006466] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.795 [2024-07-25 20:04:09.006852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.795 [2024-07-25 20:04:09.006880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.795 [2024-07-25 20:04:09.006896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.795 [2024-07-25 20:04:09.007153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.795 [2024-07-25 20:04:09.007396] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.795 [2024-07-25 20:04:09.007416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.795 [2024-07-25 20:04:09.007429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.795 [2024-07-25 20:04:09.010425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.795 [2024-07-25 20:04:09.019751] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.795 [2024-07-25 20:04:09.020182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.795 [2024-07-25 20:04:09.020210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.795 [2024-07-25 20:04:09.020227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.795 [2024-07-25 20:04:09.020456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.795 [2024-07-25 20:04:09.020678] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.795 [2024-07-25 20:04:09.020699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.795 [2024-07-25 20:04:09.020712] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.795 [2024-07-25 20:04:09.024157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.795 [2024-07-25 20:04:09.033294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.795 [2024-07-25 20:04:09.033751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.033791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.033808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.034050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.034287] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.034308] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.034322] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.037458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.796 [2024-07-25 20:04:09.046570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.046971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.046997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.047012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.047266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.047501] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.047521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.047534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.050512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.796 [2024-07-25 20:04:09.059804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.060231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.060259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.060280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.060508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.060723] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.060743] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.060756] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.063768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.796 [2024-07-25 20:04:09.073096] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.073537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.073565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.073581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.073827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.074055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.074084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.074097] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.077116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.796 [2024-07-25 20:04:09.086425] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.086832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.086874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.086890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.087128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.087341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.087376] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.087389] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.090385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.796 [2024-07-25 20:04:09.099669] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.100129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.100157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.100173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.100415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.100630] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.100655] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.100668] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.103656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.796 [2024-07-25 20:04:09.112945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.113330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.113372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.113388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.113646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.113844] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.796 [2024-07-25 20:04:09.113864] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.796 [2024-07-25 20:04:09.113876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.796 [2024-07-25 20:04:09.116892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.796 [2024-07-25 20:04:09.126240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.796 [2024-07-25 20:04:09.126675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.796 [2024-07-25 20:04:09.126700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.796 [2024-07-25 20:04:09.126731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.796 [2024-07-25 20:04:09.126967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.796 [2024-07-25 20:04:09.127195] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.127216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.127229] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.130247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.797 [2024-07-25 20:04:09.139563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.139919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.139946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.139961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.140197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.140443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.140463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.140476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.143456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.797 [2024-07-25 20:04:09.152900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.153344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.153372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.153387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.153619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.153835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.153855] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.153867] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.156883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.797 [2024-07-25 20:04:09.166231] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.166650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.166690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.166707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.166935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.167196] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.167218] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.167232] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.170234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.797 [2024-07-25 20:04:09.179539] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.179890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.179917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.179933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.180156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.180398] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.180433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.180446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.183429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.797 [2024-07-25 20:04:09.192721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.193127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.193154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.193170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.193403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.193617] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.193637] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.193650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.196668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.797 [2024-07-25 20:04:09.205975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.206386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.206428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.206443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.206697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.206895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.206915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.206928] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.209954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.797 [2024-07-25 20:04:09.219319] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.797 [2024-07-25 20:04:09.219641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.797 [2024-07-25 20:04:09.219668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:33:59.797 [2024-07-25 20:04:09.219684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:33:59.797 [2024-07-25 20:04:09.219898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:33:59.797 [2024-07-25 20:04:09.220158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.797 [2024-07-25 20:04:09.220180] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.797 [2024-07-25 20:04:09.220194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.797 [2024-07-25 20:04:09.223524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.058 [2024-07-25 20:04:09.232815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.058 [2024-07-25 20:04:09.233235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.058 [2024-07-25 20:04:09.233262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.058 [2024-07-25 20:04:09.233278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.058 [2024-07-25 20:04:09.233507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.058 [2024-07-25 20:04:09.233722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.058 [2024-07-25 20:04:09.233742] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.058 [2024-07-25 20:04:09.233760] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.058 [2024-07-25 20:04:09.236763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.058 [2024-07-25 20:04:09.246082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.058 [2024-07-25 20:04:09.246532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.058 [2024-07-25 20:04:09.246559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.058 [2024-07-25 20:04:09.246575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.058 [2024-07-25 20:04:09.246818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.058 [2024-07-25 20:04:09.247033] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.058 [2024-07-25 20:04:09.247052] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.058 [2024-07-25 20:04:09.247088] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.058 [2024-07-25 20:04:09.250101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.058 [2024-07-25 20:04:09.259380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.058 [2024-07-25 20:04:09.259843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.058 [2024-07-25 20:04:09.259869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.058 [2024-07-25 20:04:09.259884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.058 [2024-07-25 20:04:09.260158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.058 [2024-07-25 20:04:09.260392] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.058 [2024-07-25 20:04:09.260413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.058 [2024-07-25 20:04:09.260426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.058 [2024-07-25 20:04:09.263420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.058 [2024-07-25 20:04:09.273032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.058 [2024-07-25 20:04:09.273453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.058 [2024-07-25 20:04:09.273483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.058 [2024-07-25 20:04:09.273501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.058 [2024-07-25 20:04:09.273739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.058 [2024-07-25 20:04:09.273982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.058 [2024-07-25 20:04:09.274006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.058 [2024-07-25 20:04:09.274021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.058 [2024-07-25 20:04:09.277600] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.058 [2024-07-25 20:04:09.286890] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.058 [2024-07-25 20:04:09.287288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.058 [2024-07-25 20:04:09.287318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.058 [2024-07-25 20:04:09.287336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.058 [2024-07-25 20:04:09.287574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.058 [2024-07-25 20:04:09.287816] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.058 [2024-07-25 20:04:09.287839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.058 [2024-07-25 20:04:09.287855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.058 [2024-07-25 20:04:09.291438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.058 [2024-07-25 20:04:09.300930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.058 [2024-07-25 20:04:09.301350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.058 [2024-07-25 20:04:09.301381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.058 [2024-07-25 20:04:09.301398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.058 [2024-07-25 20:04:09.301636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.058 [2024-07-25 20:04:09.301879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.058 [2024-07-25 20:04:09.301902] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.058 [2024-07-25 20:04:09.301918] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.058 [2024-07-25 20:04:09.305499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.059 [2024-07-25 20:04:09.314778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.315186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.315217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.315235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.315473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.315716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.315740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.315755] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.319336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.059 [2024-07-25 20:04:09.328619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.329030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.329068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.329087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.329326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.329574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.329599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.329614] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.333193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.059 [2024-07-25 20:04:09.342473] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.342888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.342917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.342935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.343183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.343426] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.343450] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.343466] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.347038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.059 [2024-07-25 20:04:09.356335] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.356713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.356743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.356761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.356999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.357252] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.357277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.357293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.360866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.059 [2024-07-25 20:04:09.370370] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.370749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.370779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.370797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.371035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.371287] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.371311] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.371327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.374900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.059 [2024-07-25 20:04:09.384410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.384788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.384819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.384836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.385087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.385331] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.385355] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.385370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.388940] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.059 [2024-07-25 20:04:09.398437] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.398838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.398868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.398885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.399136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.399379] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.399403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.399418] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.059 [2024-07-25 20:04:09.403017] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.059 [2024-07-25 20:04:09.412311] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.059 [2024-07-25 20:04:09.412811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.059 [2024-07-25 20:04:09.412841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.059 [2024-07-25 20:04:09.412858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.059 [2024-07-25 20:04:09.413109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.059 [2024-07-25 20:04:09.413352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.059 [2024-07-25 20:04:09.413376] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.059 [2024-07-25 20:04:09.413392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.060 [2024-07-25 20:04:09.416964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.060 [2024-07-25 20:04:09.426271] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.060 [2024-07-25 20:04:09.426630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.060 [2024-07-25 20:04:09.426660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.060 [2024-07-25 20:04:09.426683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.060 [2024-07-25 20:04:09.426922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.060 [2024-07-25 20:04:09.427176] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.060 [2024-07-25 20:04:09.427201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.060 [2024-07-25 20:04:09.427216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.060 [2024-07-25 20:04:09.430787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.060 [2024-07-25 20:04:09.440289] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.060 [2024-07-25 20:04:09.440794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.060 [2024-07-25 20:04:09.440844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.060 [2024-07-25 20:04:09.440861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.060 [2024-07-25 20:04:09.441110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.060 [2024-07-25 20:04:09.441354] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.060 [2024-07-25 20:04:09.441378] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.060 [2024-07-25 20:04:09.441393] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.060 [2024-07-25 20:04:09.444963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.060 [2024-07-25 20:04:09.454259] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.060 [2024-07-25 20:04:09.454676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.060 [2024-07-25 20:04:09.454707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.060 [2024-07-25 20:04:09.454724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.060 [2024-07-25 20:04:09.454962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.060 [2024-07-25 20:04:09.455216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.060 [2024-07-25 20:04:09.455240] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.060 [2024-07-25 20:04:09.455256] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.060 [2024-07-25 20:04:09.458825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.060 [2024-07-25 20:04:09.468117] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.060 [2024-07-25 20:04:09.468579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.060 [2024-07-25 20:04:09.468609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.060 [2024-07-25 20:04:09.468626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.060 [2024-07-25 20:04:09.468864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.060 [2024-07-25 20:04:09.469126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.060 [2024-07-25 20:04:09.469150] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.060 [2024-07-25 20:04:09.469166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.060 [2024-07-25 20:04:09.472735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.060 [2024-07-25 20:04:09.482024] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.060 [2024-07-25 20:04:09.482534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.060 [2024-07-25 20:04:09.482586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.060 [2024-07-25 20:04:09.482604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.060 [2024-07-25 20:04:09.482842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.060 [2024-07-25 20:04:09.483098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.060 [2024-07-25 20:04:09.483122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.060 [2024-07-25 20:04:09.483138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.319 [2024-07-25 20:04:09.486708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.319 [2024-07-25 20:04:09.496012] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.319 [2024-07-25 20:04:09.496431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.319 [2024-07-25 20:04:09.496461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.319 [2024-07-25 20:04:09.496479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.319 [2024-07-25 20:04:09.496718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.319 [2024-07-25 20:04:09.496960] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.319 [2024-07-25 20:04:09.496984] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.319 [2024-07-25 20:04:09.496999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.500584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.320 [2024-07-25 20:04:09.509886] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.510301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.510332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.510349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.510587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.510829] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.510854] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.510870] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.514460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.320 [2024-07-25 20:04:09.523768] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.524124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.524155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.524173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.524416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.524659] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.524684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.524699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.528308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.320 [2024-07-25 20:04:09.537620] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.537996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.538027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.538045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.538293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.538538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.538562] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.538577] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.542162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.320 [2024-07-25 20:04:09.551672] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.552088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.552119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.552137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.552375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.552617] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.552641] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.552657] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.556252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.320 [2024-07-25 20:04:09.565552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.565953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.565984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.566007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.566255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.566498] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.566522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.566538] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.570126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.320 [2024-07-25 20:04:09.579432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.579817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.579847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.579865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.580113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.580357] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.580381] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.580397] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.583974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.320 [2024-07-25 20:04:09.593291] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.593663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.320 [2024-07-25 20:04:09.593693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.320 [2024-07-25 20:04:09.593710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.320 [2024-07-25 20:04:09.593948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.320 [2024-07-25 20:04:09.594203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.320 [2024-07-25 20:04:09.594228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.320 [2024-07-25 20:04:09.594244] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.320 [2024-07-25 20:04:09.597820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.320 [2024-07-25 20:04:09.607137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.320 [2024-07-25 20:04:09.607595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.607625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.607642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.607880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.608138] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.608168] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.608184] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.611760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.321 [2024-07-25 20:04:09.621075] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.621585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.621641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.621659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.621897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.622151] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.622176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.622191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.625770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.321 [2024-07-25 20:04:09.635081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.635479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.635510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.635527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.635765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.636009] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.636033] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.636048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.639639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.321 [2024-07-25 20:04:09.648936] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.649328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.649358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.649376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.649614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.649857] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.649881] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.649896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.653487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.321 [2024-07-25 20:04:09.662779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.663202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.663233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.663251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.663488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.663731] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.663755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.663770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.667355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.321 [2024-07-25 20:04:09.676647] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.677043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.677081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.677100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.677338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.677581] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.677605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.677620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.681207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.321 [2024-07-25 20:04:09.690501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.690873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.690903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.690921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.321 [2024-07-25 20:04:09.691171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.321 [2024-07-25 20:04:09.691415] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.321 [2024-07-25 20:04:09.691439] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.321 [2024-07-25 20:04:09.691454] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.321 [2024-07-25 20:04:09.695022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.321 [2024-07-25 20:04:09.704522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.321 [2024-07-25 20:04:09.704893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.321 [2024-07-25 20:04:09.704923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.321 [2024-07-25 20:04:09.704940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.322 [2024-07-25 20:04:09.705194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.322 [2024-07-25 20:04:09.705438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.322 [2024-07-25 20:04:09.705463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.322 [2024-07-25 20:04:09.705478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.322 [2024-07-25 20:04:09.709048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.322 [2024-07-25 20:04:09.718547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.322 [2024-07-25 20:04:09.718991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.322 [2024-07-25 20:04:09.719041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.322 [2024-07-25 20:04:09.719068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.322 [2024-07-25 20:04:09.719308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.322 [2024-07-25 20:04:09.719551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.322 [2024-07-25 20:04:09.719575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.322 [2024-07-25 20:04:09.719590] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.322 [2024-07-25 20:04:09.723168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.322 [2024-07-25 20:04:09.732457] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.322 [2024-07-25 20:04:09.732874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.322 [2024-07-25 20:04:09.732905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.322 [2024-07-25 20:04:09.732923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.322 [2024-07-25 20:04:09.733173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.322 [2024-07-25 20:04:09.733416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.322 [2024-07-25 20:04:09.733440] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.322 [2024-07-25 20:04:09.733456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.322 [2024-07-25 20:04:09.737029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.322 [2024-07-25 20:04:09.746329] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.322 [2024-07-25 20:04:09.746736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.322 [2024-07-25 20:04:09.746766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.322 [2024-07-25 20:04:09.746784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.322 [2024-07-25 20:04:09.747022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.322 [2024-07-25 20:04:09.747275] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.322 [2024-07-25 20:04:09.747299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.322 [2024-07-25 20:04:09.747320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.582 [2024-07-25 20:04:09.750891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.582 [2024-07-25 20:04:09.760209] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.582 [2024-07-25 20:04:09.760591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.582 [2024-07-25 20:04:09.760621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.582 [2024-07-25 20:04:09.760639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.582 [2024-07-25 20:04:09.760876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.582 [2024-07-25 20:04:09.761132] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.582 [2024-07-25 20:04:09.761157] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.582 [2024-07-25 20:04:09.761173] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.582 [2024-07-25 20:04:09.764745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.582 [2024-07-25 20:04:09.774052] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.582 [2024-07-25 20:04:09.774499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.582 [2024-07-25 20:04:09.774550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.582 [2024-07-25 20:04:09.774568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.582 [2024-07-25 20:04:09.774806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.582 [2024-07-25 20:04:09.775048] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.582 [2024-07-25 20:04:09.775082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.582 [2024-07-25 20:04:09.775099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.582 [2024-07-25 20:04:09.778673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.582 [2024-07-25 20:04:09.787966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.582 [2024-07-25 20:04:09.788357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.582 [2024-07-25 20:04:09.788387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.582 [2024-07-25 20:04:09.788404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.582 [2024-07-25 20:04:09.788641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.582 [2024-07-25 20:04:09.788884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.582 [2024-07-25 20:04:09.788908] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.582 [2024-07-25 20:04:09.788924] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.582 [2024-07-25 20:04:09.792511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.582 [2024-07-25 20:04:09.801805] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.582 [2024-07-25 20:04:09.802231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.582 [2024-07-25 20:04:09.802268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.582 [2024-07-25 20:04:09.802287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.582 [2024-07-25 20:04:09.802524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.582 [2024-07-25 20:04:09.802767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.582 [2024-07-25 20:04:09.802791] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.582 [2024-07-25 20:04:09.802806] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.582 [2024-07-25 20:04:09.806427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.582 [2024-07-25 20:04:09.815721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.582 [2024-07-25 20:04:09.816123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.582 [2024-07-25 20:04:09.816154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.582 [2024-07-25 20:04:09.816172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.582 [2024-07-25 20:04:09.816410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.816653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.816677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.816692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.820275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.583 [2024-07-25 20:04:09.829583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.829960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.829990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.830008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.830256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.830499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.830524] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.830539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.834117] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.583 [2024-07-25 20:04:09.843605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.843978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.844008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.844026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.844273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.844521] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.844546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.844561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.848140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.583 [2024-07-25 20:04:09.857631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.858029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.858067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.858086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.858324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.858567] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.858591] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.858606] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.862185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.583 [2024-07-25 20:04:09.871666] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.872053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.872094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.872113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.872351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.872594] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.872618] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.872634] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.876209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.583 [2024-07-25 20:04:09.885708] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.886071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.886103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.886121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.886360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.886603] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.886627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.886643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.890229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.583 [2024-07-25 20:04:09.899746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.900131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.900162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.900180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.900419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.900661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.900685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.900701] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.904278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.583 [2024-07-25 20:04:09.913777] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.914159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.914190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.914208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.914447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.914690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.914714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.914729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.918308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.583 [2024-07-25 20:04:09.927795] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.928203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.583 [2024-07-25 20:04:09.928234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.583 [2024-07-25 20:04:09.928252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.583 [2024-07-25 20:04:09.928490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.583 [2024-07-25 20:04:09.928733] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.583 [2024-07-25 20:04:09.928757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.583 [2024-07-25 20:04:09.928773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.583 [2024-07-25 20:04:09.932351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.583 [2024-07-25 20:04:09.941841] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.583 [2024-07-25 20:04:09.942260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.584 [2024-07-25 20:04:09.942292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.584 [2024-07-25 20:04:09.942315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.584 [2024-07-25 20:04:09.942555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.584 [2024-07-25 20:04:09.942798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.584 [2024-07-25 20:04:09.942822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.584 [2024-07-25 20:04:09.942837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.584 [2024-07-25 20:04:09.946415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.584 [2024-07-25 20:04:09.955712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.584 [2024-07-25 20:04:09.956091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.584 [2024-07-25 20:04:09.956121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.584 [2024-07-25 20:04:09.956139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.584 [2024-07-25 20:04:09.956377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.584 [2024-07-25 20:04:09.956619] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.584 [2024-07-25 20:04:09.956643] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.584 [2024-07-25 20:04:09.956659] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.584 [2024-07-25 20:04:09.960242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.584 [2024-07-25 20:04:09.969734] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.584 [2024-07-25 20:04:09.970148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.584 [2024-07-25 20:04:09.970179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.584 [2024-07-25 20:04:09.970197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.584 [2024-07-25 20:04:09.970435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.584 [2024-07-25 20:04:09.970678] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.584 [2024-07-25 20:04:09.970702] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.584 [2024-07-25 20:04:09.970717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.584 [2024-07-25 20:04:09.974297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.584 [2024-07-25 20:04:09.983599] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.584 [2024-07-25 20:04:09.984010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.584 [2024-07-25 20:04:09.984040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.584 [2024-07-25 20:04:09.984065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.584 [2024-07-25 20:04:09.984306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.584 [2024-07-25 20:04:09.984548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.584 [2024-07-25 20:04:09.984577] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.584 [2024-07-25 20:04:09.984594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.584 [2024-07-25 20:04:09.988167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.584 [2024-07-25 20:04:09.997480] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.584 [2024-07-25 20:04:09.997851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.584 [2024-07-25 20:04:09.997881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.584 [2024-07-25 20:04:09.997898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.584 [2024-07-25 20:04:09.998148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.584 [2024-07-25 20:04:09.998399] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.584 [2024-07-25 20:04:09.998423] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.584 [2024-07-25 20:04:09.998439] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.584 [2024-07-25 20:04:10.002007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.844 [2024-07-25 20:04:10.011465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.844 [2024-07-25 20:04:10.011892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.844 [2024-07-25 20:04:10.011924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.844 [2024-07-25 20:04:10.011943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.844 [2024-07-25 20:04:10.012194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.844 [2024-07-25 20:04:10.012438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.844 [2024-07-25 20:04:10.012463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.844 [2024-07-25 20:04:10.012480] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.844 [2024-07-25 20:04:10.016072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.844 [2024-07-25 20:04:10.025407] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.844 [2024-07-25 20:04:10.025805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.844 [2024-07-25 20:04:10.025839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.844 [2024-07-25 20:04:10.025857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.844 [2024-07-25 20:04:10.026111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.844 [2024-07-25 20:04:10.026358] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.844 [2024-07-25 20:04:10.026382] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.844 [2024-07-25 20:04:10.026399] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.844 [2024-07-25 20:04:10.029980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.844 [2024-07-25 20:04:10.039314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.844 [2024-07-25 20:04:10.039740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.844 [2024-07-25 20:04:10.039792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.844 [2024-07-25 20:04:10.039810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.844 [2024-07-25 20:04:10.040049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.844 [2024-07-25 20:04:10.040302] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.844 [2024-07-25 20:04:10.040326] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.844 [2024-07-25 20:04:10.040342] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.844 [2024-07-25 20:04:10.043920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.844 [2024-07-25 20:04:10.053283] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.844 [2024-07-25 20:04:10.053697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.844 [2024-07-25 20:04:10.053728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.844 [2024-07-25 20:04:10.053746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.844 [2024-07-25 20:04:10.053985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.844 [2024-07-25 20:04:10.054238] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.844 [2024-07-25 20:04:10.054262] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.844 [2024-07-25 20:04:10.054278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.844 [2024-07-25 20:04:10.057857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.844 [2024-07-25 20:04:10.067166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.067624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.067655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.067672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.067911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.068173] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.068199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.068216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.071786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.845 [2024-07-25 20:04:10.081087] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.081570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.081600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.081625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.081864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.082125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.082150] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.082167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.085738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.845 [2024-07-25 20:04:10.095033] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.095462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.095513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.095531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.095769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.096012] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.096035] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.096051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.099646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.845 [2024-07-25 20:04:10.108945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.109356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.109423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.109441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.109680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.109922] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.109946] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.109962] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.113551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.845 [2024-07-25 20:04:10.122850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.123271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.123303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.123321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.123559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.123802] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.123826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.123848] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.127436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.845 [2024-07-25 20:04:10.136756] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.137301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.137355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.137373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.137611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.137854] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.137878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.137893] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.141489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.845 [2024-07-25 20:04:10.150789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.151200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.151231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.151249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.151487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.151730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.151754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.151770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.155365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.845 [2024-07-25 20:04:10.164665] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.165073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.165104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.165122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.165361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.845 [2024-07-25 20:04:10.165605] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.845 [2024-07-25 20:04:10.165629] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.845 [2024-07-25 20:04:10.165644] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.845 [2024-07-25 20:04:10.169232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.845 [2024-07-25 20:04:10.178528] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.845 [2024-07-25 20:04:10.178940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.845 [2024-07-25 20:04:10.178971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.845 [2024-07-25 20:04:10.178988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.845 [2024-07-25 20:04:10.179245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.179490] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.179514] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.179530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.183118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.846 [2024-07-25 20:04:10.192408] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.846 [2024-07-25 20:04:10.192811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.846 [2024-07-25 20:04:10.192842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.846 [2024-07-25 20:04:10.192859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.846 [2024-07-25 20:04:10.193118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.193362] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.193387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.193402] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.196970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.846 [2024-07-25 20:04:10.206286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.846 [2024-07-25 20:04:10.206664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.846 [2024-07-25 20:04:10.206695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.846 [2024-07-25 20:04:10.206712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.846 [2024-07-25 20:04:10.206950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.207212] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.207238] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.207254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.210833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.846 [2024-07-25 20:04:10.220133] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.846 [2024-07-25 20:04:10.220543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.846 [2024-07-25 20:04:10.220574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.846 [2024-07-25 20:04:10.220592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.846 [2024-07-25 20:04:10.220836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.221096] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.221121] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.221137] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.224710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.846 [2024-07-25 20:04:10.234004] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.846 [2024-07-25 20:04:10.234501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.846 [2024-07-25 20:04:10.234532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.846 [2024-07-25 20:04:10.234549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.846 [2024-07-25 20:04:10.234787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.235030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.235054] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.235091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.238670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.846 [2024-07-25 20:04:10.247966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.846 [2024-07-25 20:04:10.248356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.846 [2024-07-25 20:04:10.248387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.846 [2024-07-25 20:04:10.248404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.846 [2024-07-25 20:04:10.248642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.248886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.248910] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.248925] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.252589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.846 [2024-07-25 20:04:10.261891] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.846 [2024-07-25 20:04:10.262306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.846 [2024-07-25 20:04:10.262337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:00.846 [2024-07-25 20:04:10.262355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:00.846 [2024-07-25 20:04:10.262593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:00.846 [2024-07-25 20:04:10.262835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.846 [2024-07-25 20:04:10.262859] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.846 [2024-07-25 20:04:10.262880] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.846 [2024-07-25 20:04:10.266473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.106 [2024-07-25 20:04:10.275790] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.106 [2024-07-25 20:04:10.276203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.106 [2024-07-25 20:04:10.276234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.106 [2024-07-25 20:04:10.276252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.106 [2024-07-25 20:04:10.276490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.106 [2024-07-25 20:04:10.276732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.106 [2024-07-25 20:04:10.276756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.106 [2024-07-25 20:04:10.276772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.106 [2024-07-25 20:04:10.280366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.106 [2024-07-25 20:04:10.289697] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.106 [2024-07-25 20:04:10.290518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.106 [2024-07-25 20:04:10.290551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.106 [2024-07-25 20:04:10.290570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.106 [2024-07-25 20:04:10.290811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.106 [2024-07-25 20:04:10.291055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.106 [2024-07-25 20:04:10.291089] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.106 [2024-07-25 20:04:10.291105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.106 [2024-07-25 20:04:10.294694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.106 [2024-07-25 20:04:10.303599] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.106 [2024-07-25 20:04:10.303986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.106 [2024-07-25 20:04:10.304018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.106 [2024-07-25 20:04:10.304036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.106 [2024-07-25 20:04:10.304290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.106 [2024-07-25 20:04:10.304536] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.106 [2024-07-25 20:04:10.304560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.106 [2024-07-25 20:04:10.304576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.106 [2024-07-25 20:04:10.308168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.106 [2024-07-25 20:04:10.317488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.106 [2024-07-25 20:04:10.317890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.317926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.317944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.318194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.318437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.318462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.318477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.322069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.107 [2024-07-25 20:04:10.331394] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.331872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.331902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.331920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.332173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.332416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.332440] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.332456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.336041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.107 [2024-07-25 20:04:10.345367] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.345882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.345933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.345951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.346210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.346454] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.346478] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.346494] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.350085] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.107 [2024-07-25 20:04:10.359385] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.359790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.359821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.359839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.360094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.360345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.360370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.360386] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.363955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.107 [2024-07-25 20:04:10.373258] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.373661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.373692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.373709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.373947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.374208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.374234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.374250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.377819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.107 [2024-07-25 20:04:10.387113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.387533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.387564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.387582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.387820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.388074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.388102] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.388119] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.391690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.107 [2024-07-25 20:04:10.400980] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.401393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.401423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.401441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.401679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.401922] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.401946] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.401961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.405554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.107 [2024-07-25 20:04:10.414848] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.415232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.415264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.415281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.107 [2024-07-25 20:04:10.415519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.107 [2024-07-25 20:04:10.415763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.107 [2024-07-25 20:04:10.415787] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.107 [2024-07-25 20:04:10.415802] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.107 [2024-07-25 20:04:10.419389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.107 [2024-07-25 20:04:10.428883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.107 [2024-07-25 20:04:10.429266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.107 [2024-07-25 20:04:10.429297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.107 [2024-07-25 20:04:10.429315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.429553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.429796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.429820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.429836] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.433425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.108 [2024-07-25 20:04:10.442924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.443360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.443392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.443409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.443649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.443891] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.443915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.443931] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.447517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.108 [2024-07-25 20:04:10.456819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.457229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.457260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.457283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.457523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.457765] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.457789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.457805] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.461393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.108 [2024-07-25 20:04:10.470683] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.471077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.471108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.471126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.471364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.471607] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.471631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.471647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.475236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.108 [2024-07-25 20:04:10.484520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.484934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.484966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.484983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.485240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.485484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.485508] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.485524] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.489110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.108 [2024-07-25 20:04:10.498391] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.498797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.498827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.498844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.499098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.499342] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.499372] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.499389] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.502958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.108 [2024-07-25 20:04:10.512281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.512797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.512852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.512870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.513127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.513371] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.513396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.513411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.516979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.108 [2024-07-25 20:04:10.526277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.108 [2024-07-25 20:04:10.526732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.108 [2024-07-25 20:04:10.526785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.108 [2024-07-25 20:04:10.526803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.108 [2024-07-25 20:04:10.527041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.108 [2024-07-25 20:04:10.527300] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.108 [2024-07-25 20:04:10.527325] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.108 [2024-07-25 20:04:10.527341] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.108 [2024-07-25 20:04:10.530910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.369 [2024-07-25 20:04:10.540229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.369 [2024-07-25 20:04:10.540643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.369 [2024-07-25 20:04:10.540674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.369 [2024-07-25 20:04:10.540691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.369 [2024-07-25 20:04:10.540929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.369 [2024-07-25 20:04:10.541182] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.369 [2024-07-25 20:04:10.541206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.369 [2024-07-25 20:04:10.541222] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.369 [2024-07-25 20:04:10.544799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.369 [2024-07-25 20:04:10.554120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.369 [2024-07-25 20:04:10.554545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.369 [2024-07-25 20:04:10.554576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.369 [2024-07-25 20:04:10.554594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.369 [2024-07-25 20:04:10.554832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.369 [2024-07-25 20:04:10.555091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.369 [2024-07-25 20:04:10.555117] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.369 [2024-07-25 20:04:10.555133] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.369 [2024-07-25 20:04:10.558699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.369 [2024-07-25 20:04:10.567981] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.369 [2024-07-25 20:04:10.568448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.369 [2024-07-25 20:04:10.568500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.369 [2024-07-25 20:04:10.568518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.369 [2024-07-25 20:04:10.568756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.369 [2024-07-25 20:04:10.568999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.369 [2024-07-25 20:04:10.569023] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.369 [2024-07-25 20:04:10.569038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.369 [2024-07-25 20:04:10.572624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.369 [2024-07-25 20:04:10.581913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.369 [2024-07-25 20:04:10.582323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.369 [2024-07-25 20:04:10.582354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.369 [2024-07-25 20:04:10.582371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.369 [2024-07-25 20:04:10.582609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.369 [2024-07-25 20:04:10.582852] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.369 [2024-07-25 20:04:10.582876] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.369 [2024-07-25 20:04:10.582892] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.369 [2024-07-25 20:04:10.586474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.369 [2024-07-25 20:04:10.595756] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.369 [2024-07-25 20:04:10.596157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.369 [2024-07-25 20:04:10.596188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.369 [2024-07-25 20:04:10.596206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.369 [2024-07-25 20:04:10.596451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.369 [2024-07-25 20:04:10.596694] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.369 [2024-07-25 20:04:10.596718] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.369 [2024-07-25 20:04:10.596734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.369 [2024-07-25 20:04:10.600320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.369 [2024-07-25 20:04:10.609609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.369 [2024-07-25 20:04:10.609987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.610018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.610035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.610288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.610533] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.610557] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.610573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.614159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.370 [2024-07-25 20:04:10.623648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.624056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.624093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.624111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.624349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.624592] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.624617] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.624632] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.628216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.370 [2024-07-25 20:04:10.637503] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.637939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.637969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.637987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.638243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.638488] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.638512] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.638534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.642136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.370 [2024-07-25 20:04:10.651451] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.651860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.651891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.651909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.652158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.652402] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.652426] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.652442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.656030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.370 [2024-07-25 20:04:10.665366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.665785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.665816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.665833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.666082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.666326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.666350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.666367] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.669947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.370 [2024-07-25 20:04:10.679282] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.679697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.679728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.679745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.679983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.680246] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.680272] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.680287] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.683865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.370 [2024-07-25 20:04:10.693172] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.693609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.693658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.693675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.370 [2024-07-25 20:04:10.693913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.370 [2024-07-25 20:04:10.694166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.370 [2024-07-25 20:04:10.694191] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.370 [2024-07-25 20:04:10.694206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.370 [2024-07-25 20:04:10.697786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.370 [2024-07-25 20:04:10.707122] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.370 [2024-07-25 20:04:10.707570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.370 [2024-07-25 20:04:10.707622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.370 [2024-07-25 20:04:10.707640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.707878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.708133] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.708157] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.708174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.711757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.371 [2024-07-25 20:04:10.721073] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.371 [2024-07-25 20:04:10.721488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.371 [2024-07-25 20:04:10.721518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.371 [2024-07-25 20:04:10.721536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.721775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.722018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.722042] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.722067] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.725652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.371 [2024-07-25 20:04:10.734963] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.371 [2024-07-25 20:04:10.735434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.371 [2024-07-25 20:04:10.735487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.371 [2024-07-25 20:04:10.735505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.735749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.735992] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.736015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.736031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.739631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.371 [2024-07-25 20:04:10.748936] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.371 [2024-07-25 20:04:10.749332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.371 [2024-07-25 20:04:10.749363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.371 [2024-07-25 20:04:10.749381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.749620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.749863] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.749888] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.749903] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.753503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.371 [2024-07-25 20:04:10.762812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.371 [2024-07-25 20:04:10.763200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.371 [2024-07-25 20:04:10.763231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.371 [2024-07-25 20:04:10.763249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.763486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.763729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.763754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.763770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.767361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.371 [2024-07-25 20:04:10.776660] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.371 [2024-07-25 20:04:10.777016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.371 [2024-07-25 20:04:10.777047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.371 [2024-07-25 20:04:10.777077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.777323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.777566] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.777590] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.777611] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.781198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.371 [2024-07-25 20:04:10.790717] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.371 [2024-07-25 20:04:10.791123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.371 [2024-07-25 20:04:10.791154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.371 [2024-07-25 20:04:10.791172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.371 [2024-07-25 20:04:10.791410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.371 [2024-07-25 20:04:10.791653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.371 [2024-07-25 20:04:10.791677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.371 [2024-07-25 20:04:10.791693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.371 [2024-07-25 20:04:10.795284] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.631 [2024-07-25 20:04:10.804585] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.631 [2024-07-25 20:04:10.804988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.805018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.805036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.805289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.805533] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.805557] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.805573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.809160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.632 [2024-07-25 20:04:10.818452] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.818862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.818893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.818910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.819168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.819412] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.819437] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.819453] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.823022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.632 [2024-07-25 20:04:10.832322] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.832719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.832754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.832773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.833011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.833270] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.833296] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.833312] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.836881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.632 [2024-07-25 20:04:10.846178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.846579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.846610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.846627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.846865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.847125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.847151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.847167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.850735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.632 [2024-07-25 20:04:10.860022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.860441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.860473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.860491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.860730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.860973] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.860998] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.861013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.864601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.632 [2024-07-25 20:04:10.873886] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.874306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.874337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.874354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.874592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.874841] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.874866] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.874882] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.878483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.632 [2024-07-25 20:04:10.887771] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.888175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.888206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.632 [2024-07-25 20:04:10.888224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.632 [2024-07-25 20:04:10.888463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.632 [2024-07-25 20:04:10.888706] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.632 [2024-07-25 20:04:10.888730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.632 [2024-07-25 20:04:10.888745] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.632 [2024-07-25 20:04:10.892333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.632 [2024-07-25 20:04:10.901621] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.632 [2024-07-25 20:04:10.901997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.632 [2024-07-25 20:04:10.902028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.902046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.902298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.902543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.902567] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.902583] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.906168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.633 [2024-07-25 20:04:10.915654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.916028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.916068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.916095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.916336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.916579] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.916603] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.916618] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.920212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.633 [2024-07-25 20:04:10.929488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.929888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.929919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.929936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.930193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.930437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.930462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.930478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.934047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.633 [2024-07-25 20:04:10.943347] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.943749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.943779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.943797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.944035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.944292] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.944318] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.944334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.947905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.633 [2024-07-25 20:04:10.957235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.957592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.957624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.957642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.957881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.958144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.958169] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.958185] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.961754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.633 [2024-07-25 20:04:10.971258] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.971673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.971704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.971727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.971966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.972226] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.972252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.972268] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.975837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.633 [2024-07-25 20:04:10.985128] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.985543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.985574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.985591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.985830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.986087] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.986113] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.986129] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 [2024-07-25 20:04:10.989698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.633 [2024-07-25 20:04:10.999005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.633 [2024-07-25 20:04:10.999388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.633 [2024-07-25 20:04:10.999419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.633 [2024-07-25 20:04:10.999437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.633 [2024-07-25 20:04:10.999675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.633 [2024-07-25 20:04:10.999918] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.633 [2024-07-25 20:04:10.999942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.633 [2024-07-25 20:04:10.999958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4130097 Killed "${NVMF_APP[@]}" "$@" 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.633 [2024-07-25 20:04:11.003545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
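The pattern repeated in the blocks above is a single failure mode: once the nvmf_tgt process is killed (the bdevperf.sh "Killed" message in this block), every bdev_nvme reconnect attempt to 10.0.0.2:4420 gets connect() errno 111, which on Linux is ECONNREFUSED, so spdk_nvme_ctrlr_reconnect_poll_async keeps reporting "controller reinitialization failed" until a target is listening again. A minimal, purely illustrative bash sketch of that retry-until-refused behaviour (the address, port and error number come from the log; the loop itself is hypothetical and not part of the test scripts):

  addr=10.0.0.2 port=4420
  for attempt in 1 2 3 4 5; do
      # bash's /dev/tcp pseudo-device issues a plain connect(2); with nothing
      # listening on the port it fails just like the errno 111 entries above.
      if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
          echo "attempt $attempt: connected"
          break
      fi
      echo "attempt $attempt: connect refused/unreachable (cf. errno 111 = ECONNREFUSED)"
      sleep 1
  done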
00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4131053 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4131053 00:34:01.633 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 4131053 ']' 00:34:01.634 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.634 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:01.634 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.634 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:01.634 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.634 [2024-07-25 20:04:11.012851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.634 [2024-07-25 20:04:11.013232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.634 [2024-07-25 20:04:11.013268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.634 [2024-07-25 20:04:11.013287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.634 [2024-07-25 20:04:11.013526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.634 [2024-07-25 20:04:11.013769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.634 [2024-07-25 20:04:11.013794] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.634 [2024-07-25 20:04:11.013810] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.634 [2024-07-25 20:04:11.017413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
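At this point the test's tgt_init path has relaunched nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 4131053 and calls waitforlisten, which blocks until the new target is ready on /var/tmp/spdk.sock. A hedged sketch of what that wait boils down to, polling for the RPC UNIX socket while checking the pid is still alive (an illustration only, not the actual autotest_common.sh implementation):

  pid=4131053                      # value taken from the log above
  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      # bail out early if the target died instead of starting up
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      # once the RPC socket exists, the target is ready for rpc.py commands
      if [ -S "$rpc_sock" ]; then
          echo "nvmf_tgt is listening on $rpc_sock"
          break
      fi
      sleep 0.1
  done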
00:34:01.634 [2024-07-25 20:04:11.026721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.634 [2024-07-25 20:04:11.027114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.634 [2024-07-25 20:04:11.027146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.634 [2024-07-25 20:04:11.027164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.634 [2024-07-25 20:04:11.027408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.634 [2024-07-25 20:04:11.027656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.634 [2024-07-25 20:04:11.027678] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.634 [2024-07-25 20:04:11.027693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.634 [2024-07-25 20:04:11.030973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.634 [2024-07-25 20:04:11.040201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.634 [2024-07-25 20:04:11.040573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.634 [2024-07-25 20:04:11.040601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.634 [2024-07-25 20:04:11.040618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.634 [2024-07-25 20:04:11.040856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.634 [2024-07-25 20:04:11.041100] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.634 [2024-07-25 20:04:11.041124] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.634 [2024-07-25 20:04:11.041144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.634 [2024-07-25 20:04:11.044383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.634 [2024-07-25 20:04:11.053451] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:34:01.634 [2024-07-25 20:04:11.053523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.634 [2024-07-25 20:04:11.053803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.634 [2024-07-25 20:04:11.054205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.634 [2024-07-25 20:04:11.054234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.634 [2024-07-25 20:04:11.054251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.634 [2024-07-25 20:04:11.054481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.634 [2024-07-25 20:04:11.054703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.634 [2024-07-25 20:04:11.054724] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.634 [2024-07-25 20:04:11.054738] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.634 [2024-07-25 20:04:11.058208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.894 [2024-07-25 20:04:11.067706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.894 [2024-07-25 20:04:11.068158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.894 [2024-07-25 20:04:11.068187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.894 [2024-07-25 20:04:11.068203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.894 [2024-07-25 20:04:11.068447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.894 [2024-07-25 20:04:11.068653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.894 [2024-07-25 20:04:11.068674] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.894 [2024-07-25 20:04:11.068687] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.894 [2024-07-25 20:04:11.071787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.894 [2024-07-25 20:04:11.081015] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.894 [2024-07-25 20:04:11.081534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.894 [2024-07-25 20:04:11.081562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.894 [2024-07-25 20:04:11.081577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.894 [2024-07-25 20:04:11.081833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.894 [2024-07-25 20:04:11.082039] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.894 [2024-07-25 20:04:11.082084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.894 [2024-07-25 20:04:11.082106] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.894 [2024-07-25 20:04:11.085229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.894 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.894 [2024-07-25 20:04:11.094520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.894 [2024-07-25 20:04:11.094907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.894 [2024-07-25 20:04:11.094935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.894 [2024-07-25 20:04:11.094951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.095182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.095402] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.095423] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.095438] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.098767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
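The interleaved "EAL: No free 2048 kB hugepages reported on node 1" line is DPDK noting that NUMA node 1 contributes no free 2 MB hugepages to this nvmf_tgt instance, so its memory comes from the other node. A quick, illustrative way to inspect the per-node hugepage pools on a host like this (the sysfs layout is standard Linux; the node count is whatever the machine has):

  # per-NUMA-node 2048 kB hugepage counters (free vs. total)
  for node in /sys/devices/system/node/node*; do
      hp=$node/hugepages/hugepages-2048kB
      [ -d "$hp" ] || continue
      printf '%s: free=%s total=%s\n' "${node##*/}" "$(cat "$hp"/free_hugepages)" "$(cat "$hp"/nr_hugepages)"
  done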
00:34:01.895 [2024-07-25 20:04:11.108053] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.108455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.108483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.108499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.108714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.108941] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.108962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.108976] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.112271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.895 [2024-07-25 20:04:11.121470] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.121828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.121856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.121872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.122117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.122330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.122356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.122370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.124601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:01.895 [2024-07-25 20:04:11.125506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.895 [2024-07-25 20:04:11.134843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.135427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.135465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.135485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.135735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.135945] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.135966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.135983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.139129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.895 [2024-07-25 20:04:11.148444] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.149022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.149077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.149107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.149355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.149578] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.149598] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.149612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.152682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.895 [2024-07-25 20:04:11.161730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.162167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.162197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.162213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.162448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.162670] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.162691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.162704] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.165748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.895 [2024-07-25 20:04:11.175135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.175669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.175706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.175724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.175981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.176227] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.176251] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.176267] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.179350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.895 [2024-07-25 20:04:11.188547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.895 [2024-07-25 20:04:11.189002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.895 [2024-07-25 20:04:11.189034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.895 [2024-07-25 20:04:11.189052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.895 [2024-07-25 20:04:11.189301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.895 [2024-07-25 20:04:11.189526] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.895 [2024-07-25 20:04:11.189547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.895 [2024-07-25 20:04:11.189562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.895 [2024-07-25 20:04:11.192630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.895 [2024-07-25 20:04:11.201835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.202253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.202281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.202297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.202540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.202746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.202766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.202780] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.205854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.896 [2024-07-25 20:04:11.210584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.896 [2024-07-25 20:04:11.210614] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.896 [2024-07-25 20:04:11.210643] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.896 [2024-07-25 20:04:11.210655] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.896 [2024-07-25 20:04:11.210665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
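[editor's note] The app_setup_trace notices above are SPDK's standard hint for capturing tracepoints: the shared-memory trace file name (/dev/shm/nvmf_trace.0) is derived from the app name ("nvmf") and the -i 0 shm id. A minimal sketch of the two options the notice mentions, assuming the spdk_trace tool is built under build/bin in this workspace (the binary location is an assumption; the -s/-i flags are quoted from the notice itself):

    # Option 1: take a live snapshot from the running app (app name "nvmf", shm id 0)
    ./build/bin/spdk_trace -s nvmf -i 0 > trace_snapshot.txt

    # Option 2: copy the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0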
00:34:01.896 [2024-07-25 20:04:11.210715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.896 [2024-07-25 20:04:11.210777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.896 [2024-07-25 20:04:11.210780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.896 [2024-07-25 20:04:11.215434] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.215918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.215951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.215969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.216198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.216420] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.216442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.216458] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.219735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.896 [2024-07-25 20:04:11.228979] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.229493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.229533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.229553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.229775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.229998] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.230021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.230038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.233301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
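[editor's note] The "Reactor started on core 1/2/3" notices above mean one SPDK reactor thread per bit set in the app's core mask; in this suite the initiator-side tools run on low cores (the reconnect invocations later in this log pass -c 0xF) while the nvmf target app is started with -m 0xF0, keeping the two sides on disjoint cores 0-3 and 4-7. A tiny illustrative sketch of how those hex masks map to core bits (not taken from the scripts):

    # Cores 0-3 -> bits 0..3 -> 0xf ; cores 4-7 -> bits 4..7 -> 0xf0
    printf '0x%x\n' $(( (1<<0) | (1<<1) | (1<<2) | (1<<3) ))   # prints 0xf
    printf '0x%x\n' $(( (1<<4) | (1<<5) | (1<<6) | (1<<7) ))   # prints 0xf0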
00:34:01.896 [2024-07-25 20:04:11.242703] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.243230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.243270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.243289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.243511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.243734] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.243756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.243774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.247018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.896 [2024-07-25 20:04:11.256294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.256791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.256829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.256849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.257095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.257320] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.257356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.257373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.260542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.896 [2024-07-25 20:04:11.269879] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.270411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.270463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.270483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.270716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.270932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.270953] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.270969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.274154] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.896 [2024-07-25 20:04:11.283364] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.283967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.284006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.284025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.284257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.284495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.284517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.284534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.287891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.896 [2024-07-25 20:04:11.296853] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.297238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.297266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.297283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.896 [2024-07-25 20:04:11.297512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.896 [2024-07-25 20:04:11.297726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.896 [2024-07-25 20:04:11.297747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.896 [2024-07-25 20:04:11.297768] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.896 [2024-07-25 20:04:11.300933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.896 [2024-07-25 20:04:11.310413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.896 [2024-07-25 20:04:11.310784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.896 [2024-07-25 20:04:11.310813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:01.896 [2024-07-25 20:04:11.310829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:01.897 [2024-07-25 20:04:11.311044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:01.897 [2024-07-25 20:04:11.311276] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.897 [2024-07-25 20:04:11.311299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.897 [2024-07-25 20:04:11.311314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.897 [2024-07-25 20:04:11.314585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.897 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.897 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:01.897 20:04:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:01.897 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.897 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.157 [2024-07-25 20:04:11.323988] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.157 [2024-07-25 20:04:11.324373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.157 [2024-07-25 20:04:11.324402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:02.157 [2024-07-25 20:04:11.324418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:02.157 [2024-07-25 20:04:11.324648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:02.157 [2024-07-25 20:04:11.324860] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.157 [2024-07-25 20:04:11.324890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.158 [2024-07-25 20:04:11.324904] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.158 [2024-07-25 20:04:11.328194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.158 [2024-07-25 20:04:11.337585] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.158 [2024-07-25 20:04:11.337946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.158 [2024-07-25 20:04:11.337974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:02.158 [2024-07-25 20:04:11.337990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:02.158 [2024-07-25 20:04:11.338021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.158 [2024-07-25 20:04:11.338219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:02.158 [2024-07-25 20:04:11.338452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.158 [2024-07-25 20:04:11.338473] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.158 [2024-07-25 20:04:11.338487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.158 [2024-07-25 20:04:11.341654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.158 [2024-07-25 20:04:11.351082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.158 [2024-07-25 20:04:11.351478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.158 [2024-07-25 20:04:11.351506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:02.158 [2024-07-25 20:04:11.351522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:02.158 [2024-07-25 20:04:11.351750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:02.158 [2024-07-25 20:04:11.351971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.158 [2024-07-25 20:04:11.351992] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.158 [2024-07-25 20:04:11.352005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:02.158 [2024-07-25 20:04:11.355256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.158 [2024-07-25 20:04:11.364736] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.158 [2024-07-25 20:04:11.365134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.158 [2024-07-25 20:04:11.365163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:02.158 [2024-07-25 20:04:11.365179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:02.158 [2024-07-25 20:04:11.365418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:02.158 [2024-07-25 20:04:11.365631] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.158 [2024-07-25 20:04:11.365652] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.158 [2024-07-25 20:04:11.365666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.158 [2024-07-25 20:04:11.368935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.158 [2024-07-25 20:04:11.378236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.158 [2024-07-25 20:04:11.378769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.158 [2024-07-25 20:04:11.378819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:02.158 [2024-07-25 20:04:11.378837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:02.158 [2024-07-25 20:04:11.379121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:02.158 [2024-07-25 20:04:11.379369] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.158 [2024-07-25 20:04:11.379391] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.158 [2024-07-25 20:04:11.379416] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.158 Malloc0 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.158 [2024-07-25 20:04:11.382592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.158 [2024-07-25 20:04:11.391909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.158 [2024-07-25 20:04:11.392301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.158 [2024-07-25 20:04:11.392330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a1e0 with addr=10.0.0.2, port=4420 00:34:02.158 [2024-07-25 20:04:11.392346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a1e0 is same with the state(5) to be set 00:34:02.158 [2024-07-25 20:04:11.392561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a1e0 (9): Bad file descriptor 00:34:02.158 [2024-07-25 20:04:11.392788] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.158 [2024-07-25 20:04:11.392809] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.158 [2024-07-25 20:04:11.392823] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.158 [2024-07-25 20:04:11.396079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.158 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.159 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.159 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.159 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.159 [2024-07-25 20:04:11.401159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.159 20:04:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.159 20:04:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4130325 00:34:02.159 [2024-07-25 20:04:11.405532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.159 [2024-07-25 20:04:11.440324] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
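[editor's note] The resets keep failing only while no listener exists on 10.0.0.2:4420; once the traced rpc_cmd calls above finish configuring the target (TCP transport, Malloc0 bdev, subsystem, namespace, listener), the next reset completes with "Resetting controller successful" and the script waits on the bdevperf process (pid 4130325 here) to run its verify workload. A condensed sketch of that target-side sequence using SPDK's rpc.py against the default /var/tmp/spdk.sock socket (the test itself goes through the rpc_cmd wrapper instead; paths are relative to the spdk checkout):

    # Bring up an NVMe/TCP target equivalent to what host/bdevperf.sh configures here
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420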
00:34:12.143 00:34:12.143 Latency(us) 00:34:12.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.143 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:12.143 Verification LBA range: start 0x0 length 0x4000 00:34:12.143 Nvme1n1 : 15.02 6855.65 26.78 8996.68 0.00 8050.51 552.20 18350.08 00:34:12.143 =================================================================================================================== 00:34:12.143 Total : 6855.65 26.78 8996.68 0.00 8050.51 552.20 18350.08 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:12.143 rmmod nvme_tcp 00:34:12.143 rmmod nvme_fabrics 00:34:12.143 rmmod nvme_keyring 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 4131053 ']' 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 4131053 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 4131053 ']' 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 4131053 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4131053 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4131053' 00:34:12.143 killing process with pid 4131053 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 4131053 00:34:12.143 20:04:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 4131053 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:12.143 20:04:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.046 20:04:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.046 00:34:14.046 real 0m22.324s 00:34:14.046 user 0m59.834s 00:34:14.046 sys 0m4.364s 00:34:14.046 20:04:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:14.046 20:04:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.046 ************************************ 00:34:14.046 END TEST nvmf_bdevperf 00:34:14.046 ************************************ 00:34:14.046 20:04:23 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.046 20:04:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:14.046 20:04:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:14.046 20:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.046 ************************************ 00:34:14.046 START TEST nvmf_target_disconnect 00:34:14.046 ************************************ 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.046 * Looking for test storage... 
00:34:14.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.046 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.047 20:04:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:15.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:15.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.945 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.946 20:04:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:15.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:15.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:34:15.946 00:34:15.946 --- 10.0.0.2 ping statistics --- 00:34:15.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.946 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:34:15.946 00:34:15.946 --- 10.0.0.1 ping statistics --- 00:34:15.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.946 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:15.946 20:04:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.205 ************************************ 00:34:16.205 START TEST nvmf_target_disconnect_tc1 00:34:16.205 ************************************ 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:16.205 
20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.205 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.205 [2024-07-25 20:04:25.462805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-07-25 20:04:25.462884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd0740 with addr=10.0.0.2, port=4420 00:34:16.205 [2024-07-25 20:04:25.462922] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:16.205 [2024-07-25 20:04:25.462948] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:16.205 [2024-07-25 20:04:25.462963] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:16.205 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:16.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:16.205 Initializing NVMe Controllers 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:16.205 00:34:16.205 real 0m0.097s 00:34:16.205 user 0m0.044s 00:34:16.205 sys 0m0.052s 
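[editor's note] The tc1 output above is an expected failure: the reconnect example is pointed at 10.0.0.2:4420 before any target is configured, so spdk_nvme_probe() cannot create the admin qpair (again errno 111) and the binary exits non-zero, which the NOT wrapper turns into a passing test (es=1). A minimal plain-bash sketch of the same "must fail" pattern, with the reconnect invocation copied from the trace above (run from the spdk checkout; this is an illustration, not the script's actual NOT helper):

    # Run the reconnect example against an address with no listener and require a non-zero exit
    if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected success: probe should have failed" >&2
        exit 1
    fi
    echo "probe failed as expected"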
00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:16.205 ************************************ 00:34:16.205 END TEST nvmf_target_disconnect_tc1 00:34:16.205 ************************************ 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.205 ************************************ 00:34:16.205 START TEST nvmf_target_disconnect_tc2 00:34:16.205 ************************************ 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4134120 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4134120 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4134120 ']' 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:16.205 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.205 [2024-07-25 20:04:25.573196] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:34:16.205 [2024-07-25 20:04:25.573285] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.205 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.464 [2024-07-25 20:04:25.642776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.464 [2024-07-25 20:04:25.738422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.464 [2024-07-25 20:04:25.738486] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.464 [2024-07-25 20:04:25.738515] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.464 [2024-07-25 20:04:25.738528] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.464 [2024-07-25 20:04:25.738538] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.464 [2024-07-25 20:04:25.738689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:16.464 [2024-07-25 20:04:25.738752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:16.464 [2024-07-25 20:04:25.738803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:16.464 [2024-07-25 20:04:25.738805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.464 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.722 Malloc0 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.722 [2024-07-25 20:04:25.920412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.722 [2024-07-25 20:04:25.948629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4134227 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:16.722 20:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.722 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.663 20:04:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4134120 00:34:18.664 20:04:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 
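The tc2 setup traced above boils down to: start nvmf_tgt on cores 4-7 (-m 0xF0) inside the cvl_0_0_ns_spdk network namespace, create a Malloc bdev, a TCP transport, a subsystem with that bdev as a namespace, listeners on 10.0.0.2:4420 (subsystem and discovery), and then launch the reconnect example against it. A hedged sketch of the equivalent setup using scripts/rpc.py directly is shown below; the test itself goes through the rpc_cmd/nvmfappstart wrappers, the RPC names and arguments are copied from the trace, and the socket-wait loop is only a crude stand-in for the test's waitforlisten helper.

# Sketch of the target-side setup seen in the xtrace above (not the test script itself).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Start the target in the test's network namespace; $! is the pid later used for fault injection
# (assuming ip netns exec replaces itself with nvmf_tgt rather than forking).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
NVMF_PID=$!

# Wait until the RPC socket appears (crude stand-in for waitforlisten).
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420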
00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 [2024-07-25 20:04:27.974366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 
starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 [2024-07-25 20:04:27.974677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O 
failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Write completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 [2024-07-25 20:04:27.974987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.664 starting I/O failed 00:34:18.664 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 
Write completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Write completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Write completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 Read completed with error (sct=0, sc=8) 00:34:18.665 starting I/O failed 00:34:18.665 [2024-07-25 20:04:27.975314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.665 [2024-07-25 20:04:27.975509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.975541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.975679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.975705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.975817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.975847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.976084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.976111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.976230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.976255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.976390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.976415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.976525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.976550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 
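The burst of failed read/write completions above is the direct result of the fault injection visible in the xtrace a few lines earlier: the reconnect example is started in the background (reconnectpid=4134227), the script sleeps 2 seconds so I/O is in flight, and then the nvmf_tgt process (pid 4134120) is killed with SIGKILL, after which every outstanding command on the I/O qpairs completes with an error and the completion path reports CQ transport error -6 (No such device or address) on qpair ids 1-4. Below is only a sketch of that step under the same assumptions as the previous sketch (SPDK_DIR and NVMF_PID carried over from it); the real flow is the traced host/target_disconnect.sh lines @40 through @47.

# Sketch of the fault-injection step: start I/O, then yank the target away.
"$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
RECONNECT_PID=$!

sleep 2               # let the workload ramp up so commands are outstanding
kill -9 "$NVMF_PID"   # kill the target mid-I/O; in-flight commands fail, qpairs drop
sleep 2               # give the initiator time to notice and begin reconnect attempts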
00:34:18.665 [2024-07-25 20:04:27.976740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.976781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.976920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.976948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.977972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.977997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.978167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.978193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 
00:34:18.665 [2024-07-25 20:04:27.978300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.978325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.978452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.978477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.978569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.978594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.978690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.978731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.978864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.978908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.979089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.979127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.979256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.979281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.979381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.665 [2024-07-25 20:04:27.979407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.665 qpair failed and we were unable to recover it. 00:34:18.665 [2024-07-25 20:04:27.979513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.979539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.979695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.979721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 
00:34:18.666 [2024-07-25 20:04:27.979821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.979847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.979958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.979997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.980131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.980170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.980298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.980337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.980528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.980554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.980663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.980688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.980820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.980845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.980976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.981134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.981274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 
00:34:18.666 [2024-07-25 20:04:27.981427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.981542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.981666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.981797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.981928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.981953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.982095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.982133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.982259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.982286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.982437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.982464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.982556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.982582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.982722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.982749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 
00:34:18.666 [2024-07-25 20:04:27.982873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.982918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.983098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.983233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.983371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.983570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.983724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.983871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.983977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.984145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.984289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 
00:34:18.666 [2024-07-25 20:04:27.984444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.984576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.984711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.984837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.666 [2024-07-25 20:04:27.984863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.666 qpair failed and we were unable to recover it. 00:34:18.666 [2024-07-25 20:04:27.984990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.985130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.985262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.985391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.985543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.985705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 
00:34:18.667 [2024-07-25 20:04:27.985860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.985885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.986037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.986068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.986183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.986215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.986330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.986385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.986564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.986592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.986723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.986749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.986850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.986876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.987011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.987050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.987200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.987226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.987331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.987355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 
00:34:18.667 [2024-07-25 20:04:27.987479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.987505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.987626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.987654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.987847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.987871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.987992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.988016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.988122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.988147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.988263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.988288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.988598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.988640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.988808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.988833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.988942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.988967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.989083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.989130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 
00:34:18.667 [2024-07-25 20:04:27.989269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.989297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.989420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.989449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.989552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.989581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.989728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.989753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.989878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.989903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.990033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.990071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.990221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.667 [2024-07-25 20:04:27.990259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.667 qpair failed and we were unable to recover it. 00:34:18.667 [2024-07-25 20:04:27.990425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.990480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.990647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.990675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.990832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.990884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 
00:34:18.668 [2024-07-25 20:04:27.991010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.991153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.991281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.991402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.991522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.991712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.991853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.991891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.992001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.992027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.992143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.992171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 00:34:18.668 [2024-07-25 20:04:27.992276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.668 [2024-07-25 20:04:27.992302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.668 qpair failed and we were unable to recover it. 
[the same three-line failure, connect() errno = 111 from posix_sock_create followed by an nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.", repeats without interruption from 2024-07-25 20:04:27.992 through 20:04:28.024 for tqpair values 0x99c840, 0x7fc95c000b90, 0x7fc964000b90 and 0x7fc96c000b90, all with addr=10.0.0.2, port=4420]
00:34:18.674 [2024-07-25 20:04:28.024242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.024268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.024370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.024397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.024485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.024527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.024666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.024694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.024805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.024848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.024971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.024996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.025113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.025141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.025238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.025263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.025416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.025441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.025533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.025558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 
00:34:18.674 [2024-07-25 20:04:28.025678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.025703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.025834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.025859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.026000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.026028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.026157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.026182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.026306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.026332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.026430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.674 [2024-07-25 20:04:28.026455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.674 qpair failed and we were unable to recover it. 00:34:18.674 [2024-07-25 20:04:28.026560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.026585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.026694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.026719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.026878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.026906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.027042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.027079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 
00:34:18.675 [2024-07-25 20:04:28.027218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.027243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.027355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.027380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.027476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.027502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.027603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.027646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.027786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.027814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.027976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.028016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.028177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.028216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.028336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.028366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.028561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.028605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.028754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.028797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 
00:34:18.675 [2024-07-25 20:04:28.028901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.028928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.029927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.029954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.030090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.030116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.030235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.030274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 
00:34:18.675 [2024-07-25 20:04:28.030417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.030443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.030549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.030574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.030703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.030729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.030830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.030856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.030991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.031150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.031275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.031454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.031574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.031753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 
00:34:18.675 [2024-07-25 20:04:28.031872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.031898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.032001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.032028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.032143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.032168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.675 [2024-07-25 20:04:28.032273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.675 [2024-07-25 20:04:28.032297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.675 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.032400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.032445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.032627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.032674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.032817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.032842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.032951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.032978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.033087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.033239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 
00:34:18.676 [2024-07-25 20:04:28.033439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.033586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.033706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.033833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.033962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.033986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.034106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.034145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.034253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.034298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.034538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.034591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.034730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.034755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.034883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.034908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 
00:34:18.676 [2024-07-25 20:04:28.035028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.035186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.035337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.035524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.035678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.035834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.035958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.035985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.036076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.036201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.036324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 
00:34:18.676 [2024-07-25 20:04:28.036447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.036597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.036725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.036872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.036898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.037002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.037027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.037141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.037168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.037268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.037293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.037387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.037413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.037537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.676 [2024-07-25 20:04:28.037563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.676 qpair failed and we were unable to recover it. 00:34:18.676 [2024-07-25 20:04:28.037659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.037684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 
00:34:18.677 [2024-07-25 20:04:28.037813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.037839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.037981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.038153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.038308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.038515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.038666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.038800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.038941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.038968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.039091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.039131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.039236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.039262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 
00:34:18.677 [2024-07-25 20:04:28.039415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.039441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.039535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.039561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.039661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.039687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.039813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.039838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.039975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.040108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.040259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.040394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.040601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.040794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 
00:34:18.677 [2024-07-25 20:04:28.040969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.040995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.041139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.041188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.041304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.041333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.041496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.041538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.041724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.041771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.041870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.041895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.041995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.042021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.042134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.042172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.042275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.042301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.042424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.042450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 
00:34:18.677 [2024-07-25 20:04:28.042566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.042594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.042822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.042878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.042989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.677 [2024-07-25 20:04:28.043016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.677 qpair failed and we were unable to recover it. 00:34:18.677 [2024-07-25 20:04:28.043145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.043170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.043274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.043302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.043468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.043498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.043702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.043730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.043855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.043880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.043984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.044115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 
00:34:18.678 [2024-07-25 20:04:28.044254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.044412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.044629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.044794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.044936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.044961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.045088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.045115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.045218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.045244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.045432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.045458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.045644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.045673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.045778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.045806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 
00:34:18.678 [2024-07-25 20:04:28.045944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.045972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.046107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.046147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.046305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.046349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.046497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.046540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.046681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.046724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.046849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.046874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.046989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.047027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.047166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.047194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.047405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.047448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.047589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.047635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 
00:34:18.678 [2024-07-25 20:04:28.047763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.047809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.047918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.047952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.048120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.048146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.048250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.048274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.048415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.048442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.048599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.048627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.678 [2024-07-25 20:04:28.048758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.678 [2024-07-25 20:04:28.048785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.678 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.048927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.048958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.049097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.049137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.049287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.049331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 
00:34:18.679 [2024-07-25 20:04:28.049477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.049521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.049659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.049703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.049811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.049838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.050002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.050028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.050165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.050192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.050316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.050345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.050518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.050545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.050734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.050783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.050926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.050955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.051115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.051142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 
00:34:18.679 [2024-07-25 20:04:28.051237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.051262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.051362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.051388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.051576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.051621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.051736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.051780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.051940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.051965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.052098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.052124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.052248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.052291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.052475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.052518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.052629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.052664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.052807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.052833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 
00:34:18.679 [2024-07-25 20:04:28.052928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.052955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.053084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.053110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.053298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.053340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.053502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.053550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.053760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.053811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.053953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.053978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.679 [2024-07-25 20:04:28.054105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.679 [2024-07-25 20:04:28.054131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.679 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.054296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.054322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.054571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.054599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.054728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.054753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 
00:34:18.680 [2024-07-25 20:04:28.054927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.054954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.055101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.055127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.055237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.055263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.055437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.055464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.055605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.055650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.055816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.055844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.055978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.056005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.056141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.056167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.056293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.056317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.056487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.056520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 
00:34:18.680 [2024-07-25 20:04:28.056695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.056740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.056904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.056932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.057057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.057089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.057194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.057218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.057337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.057365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.057555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.057613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.057751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.057778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.057894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.057921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.058093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.058119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.058220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.058245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 
00:34:18.680 [2024-07-25 20:04:28.058370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.058395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.058491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.058515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.058670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.058727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.058873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.058902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.059046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.059101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.059250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.059279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.059411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.059440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.059586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.059612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.059737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.059763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.059873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.059900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 
00:34:18.680 [2024-07-25 20:04:28.060053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.060086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.680 qpair failed and we were unable to recover it. 00:34:18.680 [2024-07-25 20:04:28.060198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.680 [2024-07-25 20:04:28.060243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.060367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.060393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.060539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.060581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.060730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.060779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.060941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.060969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.061100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.061128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.061288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.061316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.061477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.061505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.061613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.061641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 
00:34:18.681 [2024-07-25 20:04:28.061781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.061824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.061950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.061976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.062080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.062112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.062235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.062278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.062449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.062492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.062582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.062608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.062762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.062789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.062920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.062945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.063117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.063145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.063277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.063304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 
00:34:18.681 [2024-07-25 20:04:28.063451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.063479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.063614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.063641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.063782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.063810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.063937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.063964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.064069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.064111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.064209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.064235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.064339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.064364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.064512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.064540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.064678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.064707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.064822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.064851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 
00:34:18.681 [2024-07-25 20:04:28.065002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.065046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.065191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.065217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.065370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.065396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.065538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.065566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.065695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.065724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.065832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aa390 is same with the state(5) to be set 00:34:18.681 [2024-07-25 20:04:28.066066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.066105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.681 qpair failed and we were unable to recover it. 00:34:18.681 [2024-07-25 20:04:28.066216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.681 [2024-07-25 20:04:28.066243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.066360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.066404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.066552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.066601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 
00:34:18.682 [2024-07-25 20:04:28.066720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.066762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.066889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.066916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.067012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.067038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.067173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.067198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.067350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.067393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.067568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.067610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.067751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.067793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.067918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.067943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.068128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.068172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.068294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.068323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 
00:34:18.682 [2024-07-25 20:04:28.068487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.068516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.068628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.068657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.068793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.068821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.068989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.069023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.069197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.069224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.069379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.069409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.069576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.069605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.069774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.069803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.069954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.069979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.070121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.070160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 
00:34:18.682 [2024-07-25 20:04:28.070309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.070338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.070500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.070528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.070682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.070729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.070952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.070998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.071122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.071147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.071249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.071273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.071467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.071526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.071676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.071720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.071858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.071901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.072055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.072087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 
00:34:18.682 [2024-07-25 20:04:28.072218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.072244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.072412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.072444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.072582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.682 [2024-07-25 20:04:28.072628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.682 qpair failed and we were unable to recover it. 00:34:18.682 [2024-07-25 20:04:28.072733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.072758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 00:34:18.683 [2024-07-25 20:04:28.072875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.072901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 00:34:18.683 [2024-07-25 20:04:28.073053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.073084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 00:34:18.683 [2024-07-25 20:04:28.073257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.073299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 00:34:18.683 [2024-07-25 20:04:28.073505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.073537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 00:34:18.683 [2024-07-25 20:04:28.073735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.073780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 00:34:18.683 [2024-07-25 20:04:28.073907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.683 [2024-07-25 20:04:28.073933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:18.683 qpair failed and we were unable to recover it. 
00:34:18.683 [2024-07-25 20:04:28.074038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.683 [2024-07-25 20:04:28.074074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:18.683 qpair failed and we were unable to recover it.
00:34:18.683 [2024-07-25 20:04:28.077818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.683 [2024-07-25 20:04:28.077858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:18.683 qpair failed and we were unable to recover it.
00:34:18.968 [2024-07-25 20:04:28.084864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.968 [2024-07-25 20:04:28.084906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:18.968 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for every reconnect attempt between 20:04:28.074 and 20:04:28.109, cycling through tqpair handles 0x7fc964000b90, 0x7fc95c000b90 and 0x7fc96c000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:18.972 [2024-07-25 20:04:28.109465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.972 [2024-07-25 20:04:28.109508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:18.972 qpair failed and we were unable to recover it.
00:34:18.972 [2024-07-25 20:04:28.109637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.109663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.109813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.109854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.109965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.109995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.110134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.110165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.110255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.110280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.110445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.110473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.110618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.110644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.110771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.110798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.110977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.111134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 
00:34:18.972 [2024-07-25 20:04:28.111259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.111432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.111635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.111793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.111913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.111939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.112067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.112093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.112194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.112219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.112374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.112404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.112529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.112556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.112656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.112681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 
00:34:18.972 [2024-07-25 20:04:28.112834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.112862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.112984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.113011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.113142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.113168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.113298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.113324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.113421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.113446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.113568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.113593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.972 [2024-07-25 20:04:28.113711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.972 [2024-07-25 20:04:28.113742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.972 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.113906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.113932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.114057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.114090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.114221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.114251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 
00:34:18.973 [2024-07-25 20:04:28.114411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.114436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.114564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.114605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.114710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.114753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.114881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.114907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.115057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.115243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.115424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.115546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.115673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.115802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 
00:34:18.973 [2024-07-25 20:04:28.115966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.115992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.116119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.116148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.116298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.116324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.116455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.116482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.116641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.116669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.116789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.116814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.116944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.116970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.117094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.117122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.117272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.117297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.117418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.117444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 
00:34:18.973 [2024-07-25 20:04:28.117598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.117626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.117749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.117775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.117864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.117889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.118958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.118986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 
00:34:18.973 [2024-07-25 20:04:28.119140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.119166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.119282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.119307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.119449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.119477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.119617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.973 [2024-07-25 20:04:28.119642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.973 qpair failed and we were unable to recover it. 00:34:18.973 [2024-07-25 20:04:28.119749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.119774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.119884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.119923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.120053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.120089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.120202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.120245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.120430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.120455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.120578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.120604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 
00:34:18.974 [2024-07-25 20:04:28.120704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.120735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.120859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.120901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.121034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.121070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.121201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.121245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.121353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.121381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.121527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.121552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.121682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.121724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.121833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.121861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.122009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.122166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 
00:34:18.974 [2024-07-25 20:04:28.122312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.122485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.122616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.122766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.122898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.122923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.123041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.123073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.123263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.123292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.123431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.123456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.123608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.123651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.123795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.123820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 
00:34:18.974 [2024-07-25 20:04:28.123950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.123975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.124096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.124137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.124250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.124278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.124418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.124444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.124536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.124562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.124710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.124739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.124858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.124883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.125020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.125045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.125151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.125176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.125308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.125334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 
00:34:18.974 [2024-07-25 20:04:28.125483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.974 [2024-07-25 20:04:28.125509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.974 qpair failed and we were unable to recover it. 00:34:18.974 [2024-07-25 20:04:28.125636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.125661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.125798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.125824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.125944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.125969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.126139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.126182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.126335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.126362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.126468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.126494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.126655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.126681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.126837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.126863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.126992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 
00:34:18.975 [2024-07-25 20:04:28.127177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.127330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.127510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.127661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.127832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.127953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.127978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.128125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.128154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.128282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.128309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.128408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.128434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.128586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.128612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 
00:34:18.975 [2024-07-25 20:04:28.128750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.128776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.128875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.128901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.129091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.129121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.129276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.129302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.129410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.129436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.129562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.129591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.129739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.129765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.129862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.129887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.130070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.130099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 00:34:18.975 [2024-07-25 20:04:28.130225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.975 [2024-07-25 20:04:28.130251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.975 qpair failed and we were unable to recover it. 
00:34:18.975 [2024-07-25 20:04:28.130380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.975 [2024-07-25 20:04:28.130405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:18.975 qpair failed and we were unable to recover it.
00:34:18.975 [2024-07-25 20:04:28.131498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.975 [2024-07-25 20:04:28.131529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:18.975 qpair failed and we were unable to recover it.
[The same three-line pattern repeats continuously from 20:04:28.130 through 20:04:28.164 (console timestamps 00:34:18.975 to 00:34:18.981): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error against 10.0.0.2 port 4420 for tqpair handles 0x7fc95c000b90 and 0x7fc96c000b90, and each attempt ends with "qpair failed and we were unable to recover it."]
00:34:18.981 [2024-07-25 20:04:28.164341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.164384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.164542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.164568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.164717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.164743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.164910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.164939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.165105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.165135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.165251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.165277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.165374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.981 [2024-07-25 20:04:28.165401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.981 qpair failed and we were unable to recover it. 00:34:18.981 [2024-07-25 20:04:28.165505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.165530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.165650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.165676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.165798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.165824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 
00:34:18.982 [2024-07-25 20:04:28.166001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.166036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.166192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.166218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.166320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.166345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.166501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.166529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.166678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.166703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.166844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.166869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.167001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.167160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.167352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.167519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 
00:34:18.982 [2024-07-25 20:04:28.167675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.167822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.167943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.167969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.168078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.168104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.168235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.168261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.168381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.168423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.168576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.168601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.168726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.168770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.168934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.168964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.169091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.169117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 
00:34:18.982 [2024-07-25 20:04:28.169277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.169303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.169480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.169529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.169641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.169667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.169796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.169822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.169943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.169972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.170095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.170122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.170221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.170246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.982 [2024-07-25 20:04:28.170411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.982 [2024-07-25 20:04:28.170439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.982 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.170598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.170624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.170745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.170771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 
00:34:18.983 [2024-07-25 20:04:28.170936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.170962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.171093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.171119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.171216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.171243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.171376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.171402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.171555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.171581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.171723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.171752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.171893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.171922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.172096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.172123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.172270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.172298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.172410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.172452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 
00:34:18.983 [2024-07-25 20:04:28.172577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.172608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.172738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.172765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.172892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.172917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.173055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.173088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.173181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.173207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.173293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.173335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.173509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.173534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.173661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.173707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.173844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.173873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.174046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.174079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 
00:34:18.983 [2024-07-25 20:04:28.174181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.174223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.174404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.174432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.174565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.174591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.174715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.174756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.174898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.174928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.175105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.175131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.175228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.175254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.175378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.175420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.175547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.175573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.175666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.175691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 
00:34:18.983 [2024-07-25 20:04:28.175834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.175862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.176001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.176026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.176136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.983 [2024-07-25 20:04:28.176162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.983 qpair failed and we were unable to recover it. 00:34:18.983 [2024-07-25 20:04:28.176259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.176285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.176411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.176437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.176558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.176601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.176718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.176748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.176865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.176891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.177021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.177046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.177176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.177204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-07-25 20:04:28.177321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.177347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.177482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.177509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.177708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.177760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.177873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.177899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.178048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.178080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.178254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.178281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.178424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.178449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.178603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.178644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.178756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.178798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.178896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.178923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-07-25 20:04:28.179019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.179049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.179182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.179207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.179333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.179359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.179458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.179485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.179611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.179637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.179784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.179810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.179985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.180185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.180335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.180491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-07-25 20:04:28.180642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.180772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.180919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.180945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.181098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.181124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.181262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.181288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.181424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.181468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.181613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.181639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.181763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.181788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.181907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.181933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 00:34:18.984 [2024-07-25 20:04:28.182070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.984 [2024-07-25 20:04:28.182097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.985 [2024-07-25 20:04:28.182237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.182263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.182395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.182420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.182542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.182570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.182717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.182743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.182888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.182931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.183088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.183116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.183268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.183293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.183482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.183526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.183635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.183664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.183783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.183808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 
00:34:18.985 [2024-07-25 20:04:28.183907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.183932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.184878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.184903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.185001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.185154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 
00:34:18.985 [2024-07-25 20:04:28.185302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.185522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.185668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.185815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.185961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.185985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.186121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.186160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.186275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.186303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.186459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.186485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.186624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.186653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.186791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.186821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 
00:34:18.985 [2024-07-25 20:04:28.186968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.186994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.187104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.187130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.187255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.187284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.187415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.187441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.187567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.187611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.985 qpair failed and we were unable to recover it. 00:34:18.985 [2024-07-25 20:04:28.187723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.985 [2024-07-25 20:04:28.187764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.187865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.187891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.188044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.188075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.188201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.188226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.188358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.188383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 
00:34:18.986 [2024-07-25 20:04:28.188473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.188498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.188619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.188648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.188790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.188816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.188988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.189159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.189312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.189462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.189666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.189852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.189968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.189993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 
00:34:18.986 [2024-07-25 20:04:28.190119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.190146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.190247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.190272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.190429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.190471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.190646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.190672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.190794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.190820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.190944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.190969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.191103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.191129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.191263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.191289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.191387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.191414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.191565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.191595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 
00:34:18.986 [2024-07-25 20:04:28.191739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.191765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.191882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.191911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.192070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.192100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.192211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.192236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.192366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.192392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.192569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.192598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.192747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.192772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.192902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.192945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.193081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.193111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.193280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.193305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 
00:34:18.986 [2024-07-25 20:04:28.193463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.986 [2024-07-25 20:04:28.193520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.986 qpair failed and we were unable to recover it. 00:34:18.986 [2024-07-25 20:04:28.193696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.193750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.193925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.193950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.194054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.194088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.194192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.194222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.194348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.194375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.194500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.194542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.194702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.194753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.194899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.194926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.195077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.195103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 
00:34:18.987 [2024-07-25 20:04:28.195205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.195231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.195357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.195382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.195504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.195544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.195709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.195759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.195878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.195905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.196030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.196056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.196167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.196192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.196347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.196372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.196530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.196558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.196682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.196708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 
00:34:18.987 [2024-07-25 20:04:28.196803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.196829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.196970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.197184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.197326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.197478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.197623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.197750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.197926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.197967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.198131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.198157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.198283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.198307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 
00:34:18.987 [2024-07-25 20:04:28.198465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.198490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.987 qpair failed and we were unable to recover it. 00:34:18.987 [2024-07-25 20:04:28.198644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.987 [2024-07-25 20:04:28.198673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.198820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.198845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.198950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.198988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.199181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.199220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.199363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.199390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.199501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.199527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.199653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.199679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.199827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.199852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.199981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 
00:34:18.988 [2024-07-25 20:04:28.200108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.200222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.200368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.200522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.200700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.200864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.200891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.201040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.201078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.201209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.201235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.201331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.201357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.201507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.201536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 
00:34:18.988 [2024-07-25 20:04:28.201660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.201685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.201812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.201839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.201991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.202020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.202165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.202192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.202322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.202348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.202496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.202521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.202683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.202707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.202883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.202911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.203082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.203120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.203252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.203279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 
00:34:18.988 [2024-07-25 20:04:28.203434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.203460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.203613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.203638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.203796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.203822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.203950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.203976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.204145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.988 [2024-07-25 20:04:28.204172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.988 qpair failed and we were unable to recover it. 00:34:18.988 [2024-07-25 20:04:28.204302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.204328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.204475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.204516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.204651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.204680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.204827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.204853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.204983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 
00:34:18.989 [2024-07-25 20:04:28.205192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.205328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.205468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.205594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.205741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.205856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.205881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.205995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.206151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.206296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.206466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 
00:34:18.989 [2024-07-25 20:04:28.206636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.206812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.206955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.206982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.207125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.207150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.207280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.207304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.207438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.207463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.207590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.207614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.207707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.207732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.207905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.207932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.208077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.208102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 
00:34:18.989 [2024-07-25 20:04:28.208195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.208220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.208360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.208398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.208587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.208612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.208767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.208810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.208939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.208983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.209150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.209178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.209302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.209328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.209528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.209577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.209699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.209731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.209894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.209938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 
00:34:18.989 [2024-07-25 20:04:28.210078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.210107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.210244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.989 [2024-07-25 20:04:28.210268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.989 qpair failed and we were unable to recover it. 00:34:18.989 [2024-07-25 20:04:28.210392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.210417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.210616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.210641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.210796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.210820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.210965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.210995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.211142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.211168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.211323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.211349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.211487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.211515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.211651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.211709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 
00:34:18.990 [2024-07-25 20:04:28.211860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.211886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.211985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.212146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.212293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.212452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.212598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.212746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.212880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.212918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.213079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.213121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 00:34:18.990 [2024-07-25 20:04:28.213264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.990 [2024-07-25 20:04:28.213289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.990 qpair failed and we were unable to recover it. 
00:34:18.990 [2024-07-25 20:04:28.213415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.990 [2024-07-25 20:04:28.213441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:18.990 qpair failed and we were unable to recover it.
00:34:18.990 [2024-07-25 20:04:28.213924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.990 [2024-07-25 20:04:28.213951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:18.990 qpair failed and we were unable to recover it.
00:34:18.990 [2024-07-25 20:04:28.214120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.990 [2024-07-25 20:04:28.214159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:18.990 qpair failed and we were unable to recover it.
00:34:18.996 [... the same triplet -- connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." -- repeats continuously for tqpairs 0x99c840, 0x7fc95c000b90, and 0x7fc96c000b90 (all with addr=10.0.0.2, port=4420) through 2024-07-25 20:04:28.244 ...]
00:34:18.996 [2024-07-25 20:04:28.244479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.244503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.244603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.244628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.244725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.244749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.244855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.244880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.244973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.245014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.245210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.245236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.245333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.245358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.245462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.245486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.245610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.245635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 00:34:18.996 [2024-07-25 20:04:28.245723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-25 20:04:28.245748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.996 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 20:04:28.245860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.245884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.245986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.246123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.246276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.246453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.246604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.246737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.246865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.246891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.247042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.247074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.247197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.247223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 20:04:28.247365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.247391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.247496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.247538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.247685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.247712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.247838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.247863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.247993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.248131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.248280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.248402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.248520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.248651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 
00:34:18.997 [2024-07-25 20:04:28.248784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.248932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.248960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.249080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.249107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.249205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.249231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.249342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.249370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.249518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.249544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.249677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.249704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.997 [2024-07-25 20:04:28.249830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-25 20:04:28.249856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.997 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.249979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.250144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 20:04:28.250268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.250416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.250535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.250733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.250915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.250941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.251039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.251196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.251350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.251472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.251648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 20:04:28.251794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.251925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.251949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.252890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.252914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.253023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 20:04:28.253161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.253282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.253426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.253577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.253702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.253857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.253884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 
00:34:18.998 [2024-07-25 20:04:28.254570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.254971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.998 [2024-07-25 20:04:28.254996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.998 qpair failed and we were unable to recover it. 00:34:18.998 [2024-07-25 20:04:28.255098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.255217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.255344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.255475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.255600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.255722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 
00:34:18.999 [2024-07-25 20:04:28.255841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.255971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.255998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.256925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.256950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.257102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 
00:34:18.999 [2024-07-25 20:04:28.257227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.257350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.257474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.257630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.257756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.257885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.257911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.258008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.258168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.258296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.258448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 
00:34:18.999 [2024-07-25 20:04:28.258608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.258760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.258911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.258936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.259034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.259065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.259160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.259184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.259310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.259336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.259469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.259495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.259622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.259648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:18.999 [2024-07-25 20:04:28.259753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.999 [2024-07-25 20:04:28.259779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:18.999 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.259901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.259927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 
00:34:19.000 [2024-07-25 20:04:28.260024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.260955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.260980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 
00:34:19.000 [2024-07-25 20:04:28.261317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.261964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.261989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.262110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.262234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.262351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.262473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 
00:34:19.000 [2024-07-25 20:04:28.262645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.262782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.262929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.262953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.263896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.263922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 
00:34:19.000 [2024-07-25 20:04:28.264019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.000 [2024-07-25 20:04:28.264043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.000 qpair failed and we were unable to recover it. 00:34:19.000 [2024-07-25 20:04:28.264162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.264186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.264331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.264356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.264472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.264496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.264593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.264618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.264743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.264767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.264885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.264910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 
00:34:19.001 [2024-07-25 20:04:28.265398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.265887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.265912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 
00:34:19.001 [2024-07-25 20:04:28.266675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.266924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.266950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.267923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.267947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 
00:34:19.001 [2024-07-25 20:04:28.268039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.268073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.268174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.268198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.268317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.268345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.268465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.268490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.268620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.268645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.001 [2024-07-25 20:04:28.268777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.001 [2024-07-25 20:04:28.268804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.001 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.268955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.268980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.269112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.269228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.269357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 
00:34:19.002 [2024-07-25 20:04:28.269488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.269703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.269822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.269937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.269961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.270066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.270218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.270335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.270488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.270615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.270742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 
00:34:19.002 [2024-07-25 20:04:28.270892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.270917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.271906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.271999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.272130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 
00:34:19.002 [2024-07-25 20:04:28.272288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.272448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.272595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.272742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.002 [2024-07-25 20:04:28.272889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.002 [2024-07-25 20:04:28.272914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.002 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 
00:34:19.003 [2024-07-25 20:04:28.273692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.273940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.273965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.274123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.274280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.274456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.274600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.274727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.274857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.274995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 
00:34:19.003 [2024-07-25 20:04:28.275136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.275267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.275422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.275542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.275664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.275788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.275812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.275974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.276175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.276329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.276458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 
00:34:19.003 [2024-07-25 20:04:28.276613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.276772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.276925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.276951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.003 qpair failed and we were unable to recover it. 00:34:19.003 [2024-07-25 20:04:28.277960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.003 [2024-07-25 20:04:28.277987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 
00:34:19.004 [2024-07-25 20:04:28.278094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.278121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.278271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.278300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.278453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.278481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.278614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.278640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.278734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.278761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.278892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.278918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.279066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.279205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.279328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.279457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 
00:34:19.004 [2024-07-25 20:04:28.279661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.279784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.279914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.279940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.280918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.280943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 
00:34:19.004 [2024-07-25 20:04:28.281042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.281181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.281302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.281451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.281574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.281722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.281897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.281925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.282047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.282264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.282378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 
00:34:19.004 [2024-07-25 20:04:28.282529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.282659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.282785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.282913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.282938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.283033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.283057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.004 qpair failed and we were unable to recover it. 00:34:19.004 [2024-07-25 20:04:28.283193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.004 [2024-07-25 20:04:28.283217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.283343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.283368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.283468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.283494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.283595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.283619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.283717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.283742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 
00:34:19.005 [2024-07-25 20:04:28.283865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.283889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.283981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.284138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.284314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.284484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.284612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.284754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.284879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.284904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.285077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.285197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 
00:34:19.005 [2024-07-25 20:04:28.285342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.285484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.285623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.285759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.285908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.285933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.286090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.286129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.286241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.286269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.286370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.286396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.286522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.286550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 00:34:19.005 [2024-07-25 20:04:28.286697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.005 [2024-07-25 20:04:28.286723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.005 qpair failed and we were unable to recover it. 
00:34:19.005 [2024-07-25 20:04:28.286845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.005 [2024-07-25 20:04:28.286870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.005 qpair failed and we were unable to recover it.
00:34:19.005 - 00:34:19.011 [log condensed: the same three-line pattern, posix.c:1037:posix_sock_create "*ERROR*: connect() failed, errno = 111", nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "*ERROR*: sock connection error", and "qpair failed and we were unable to recover it.", repeats continuously from 20:04:28.286845 through 20:04:28.317133 for tqpair values 0x99c840, 0x7fc95c000b90, and 0x7fc96c000b90, always with addr=10.0.0.2, port=4420.]
00:34:19.011 [2024-07-25 20:04:28.317254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.011 [2024-07-25 20:04:28.317279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.011 qpair failed and we were unable to recover it. 00:34:19.011 [2024-07-25 20:04:28.317413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.011 [2024-07-25 20:04:28.317451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.011 qpair failed and we were unable to recover it. 00:34:19.011 [2024-07-25 20:04:28.317554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.317580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.317706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.317731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.317893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.317918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 
00:34:19.012 [2024-07-25 20:04:28.318664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.318935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.318960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.319094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.319140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.319265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.319295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.319402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.319428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.319561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.319590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.319757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.319782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.319900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.319926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.320077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 
00:34:19.012 [2024-07-25 20:04:28.320206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.320368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.320523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.320643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.320767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.320907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.320935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 
00:34:19.012 [2024-07-25 20:04:28.321580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.012 [2024-07-25 20:04:28.321845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.012 qpair failed and we were unable to recover it. 00:34:19.012 [2024-07-25 20:04:28.321931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.321955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.322079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.322223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.322343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.322492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.322642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.322755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 
00:34:19.013 [2024-07-25 20:04:28.322888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.322927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.323895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.323919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.324016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.324142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 
00:34:19.013 [2024-07-25 20:04:28.324271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.324403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.324591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.324760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.324938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.324964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.325085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.325210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.325388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.325568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.325687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 
00:34:19.013 [2024-07-25 20:04:28.325814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.325940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.325964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.326965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.326992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 
00:34:19.013 [2024-07-25 20:04:28.327116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.327142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.013 [2024-07-25 20:04:28.327233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.013 [2024-07-25 20:04:28.327259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.013 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.327359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.327384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.327532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.327559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.327659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.327684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.327775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.327800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.327951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.327978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.328095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.328217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.328365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 
00:34:19.014 [2024-07-25 20:04:28.328513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.328674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.328842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.328966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.328991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.329096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.329212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.329341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.329472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.329594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.329718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 
00:34:19.014 [2024-07-25 20:04:28.329865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.329890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.330859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.330884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.331002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.331131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 
00:34:19.014 [2024-07-25 20:04:28.331326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.331479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.331604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.331727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.331891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.331918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.332050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.332083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.332199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.332224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.332311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.332336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.332456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.332480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.014 [2024-07-25 20:04:28.332629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.332653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 
00:34:19.014 [2024-07-25 20:04:28.332755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-25 20:04:28.332779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.014 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.332899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.332924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.333892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.333920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.334080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 
00:34:19.015 [2024-07-25 20:04:28.334203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.334360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.334478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.334609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.334740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.334860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.334887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.335010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.335170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.335327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.335479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 
00:34:19.015 [2024-07-25 20:04:28.335625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.335753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.335889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.335915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.336839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.336865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 
00:34:19.015 [2024-07-25 20:04:28.337011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.337188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.337313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.337439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.337572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.337753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.337875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.337899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.338007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.338030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.338127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-25 20:04:28.338152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.015 qpair failed and we were unable to recover it. 00:34:19.015 [2024-07-25 20:04:28.338277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.338305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 
00:34:19.016 [2024-07-25 20:04:28.338429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.338470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.338610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.338634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.338739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.338763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.338863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.338886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 
00:34:19.016 [2024-07-25 20:04:28.339827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.339971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.339997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.340157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.340337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.340469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.340602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.340724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.340854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.340978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.341121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 
00:34:19.016 [2024-07-25 20:04:28.341241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.341385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.341506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.341630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.341753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.341892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.341921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 
00:34:19.016 [2024-07-25 20:04:28.342530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.016 [2024-07-25 20:04:28.342971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-25 20:04:28.342996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.016 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.343121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.343150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.343275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.343299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.343422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.343447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.343545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.343570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.343700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.343725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.343849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.343873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 
00:34:19.017 [2024-07-25 20:04:28.343998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.344037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.344195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.344223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.344330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.344374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.344520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.344548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.344697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.344723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.344823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.344848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.344980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.345148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.345275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.345434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 
00:34:19.017 [2024-07-25 20:04:28.345549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.345675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.345799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.345927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.345952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.346048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.346181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.346295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.346421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.346572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.346737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 
00:34:19.017 [2024-07-25 20:04:28.346873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.346898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.347025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.017 [2024-07-25 20:04:28.347057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.017 qpair failed and we were unable to recover it. 00:34:19.017 [2024-07-25 20:04:28.347225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.347252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.347380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.347407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.347507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.347534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.347637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.347663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.347788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.347814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.347941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.347968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.348081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.348207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 
00:34:19.018 [2024-07-25 20:04:28.348358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.348504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.348629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.348751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.348884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.348910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.349008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.349169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.349300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.349464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.349655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 
00:34:19.018 [2024-07-25 20:04:28.349776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.349929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.349954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.350935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.350962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.351113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.351140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 
00:34:19.018 [2024-07-25 20:04:28.351237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.351263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.351371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.351397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.351526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.351552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.351655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.351680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.351825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.351864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.351985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.352015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.352208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.352236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.352365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.352391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.352493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.018 [2024-07-25 20:04:28.352521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.018 qpair failed and we were unable to recover it. 00:34:19.018 [2024-07-25 20:04:28.352679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.352706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 
00:34:19.019 [2024-07-25 20:04:28.352871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.352897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.352990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.353852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.353988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.354214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 
00:34:19.019 [2024-07-25 20:04:28.354366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.354497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.354655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.354832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.354965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.354991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.355103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.355229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.355386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.355514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.355646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 
00:34:19.019 [2024-07-25 20:04:28.355765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.355895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.355920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.356087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.356130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.356281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.356308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.356441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.356472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.356615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.356641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.356760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.356786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.356916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.356942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.357085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.357213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 
00:34:19.019 [2024-07-25 20:04:28.357361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.357518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.357671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.357795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.357932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.357958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.358092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-25 20:04:28.358130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.019 qpair failed and we were unable to recover it. 00:34:19.019 [2024-07-25 20:04:28.358240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.358265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.358363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.358389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.358546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.358571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.358691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.358718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 
00:34:19.020 [2024-07-25 20:04:28.358873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.358898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.358996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.359127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.359274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.359405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.359548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.359669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.359826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.359853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.360025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.360054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 00:34:19.020 [2024-07-25 20:04:28.360195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-25 20:04:28.360221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.020 qpair failed and we were unable to recover it. 
00:34:19.020 [2024-07-25 20:04:28.360321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.020 [2024-07-25 20:04:28.360347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.020 qpair failed and we were unable to recover it.
00:34:19.020 [2024-07-25 20:04:28.362065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.020 [2024-07-25 20:04:28.362104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:19.020 qpair failed and we were unable to recover it.
00:34:19.020 [2024-07-25 20:04:28.362567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.020 [2024-07-25 20:04:28.362596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.020 qpair failed and we were unable to recover it.
00:34:19.021 [2024-07-25 20:04:28.368405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.021 [2024-07-25 20:04:28.368447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.021 qpair failed and we were unable to recover it.
00:34:19.022 [2024-07-25 20:04:28.369379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.022 [2024-07-25 20:04:28.369406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.022 qpair failed and we were unable to recover it.
00:34:19.305 [2024-07-25 20:04:28.373109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.305 [2024-07-25 20:04:28.373148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.305 qpair failed and we were unable to recover it.
00:34:19.305 [2024-07-25 20:04:28.374161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.305 [2024-07-25 20:04:28.374200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.305 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.374962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.374993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.375130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.375157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.375937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.375967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.376864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.376891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.377002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.377041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.378985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.379013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.306 [2024-07-25 20:04:28.379141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-07-25 20:04:28.379180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.306 qpair failed and we were unable to recover it.
00:34:19.307 [2024-07-25 20:04:28.381789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-07-25 20:04:28.381820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.307 qpair failed and we were unable to recover it.
00:34:19.307 [2024-07-25 20:04:28.382130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-07-25 20:04:28.382158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.307 qpair failed and we were unable to recover it.
00:34:19.307 [2024-07-25 20:04:28.382691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-07-25 20:04:28.382720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.307 qpair failed and we were unable to recover it.
00:34:19.307 [2024-07-25 20:04:28.383947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-07-25 20:04:28.383973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.307 qpair failed and we were unable to recover it.
00:34:19.307 [2024-07-25 20:04:28.384979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-07-25 20:04:28.385008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.307 qpair failed and we were unable to recover it.
00:34:19.308 [2024-07-25 20:04:28.389939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-07-25 20:04:28.389978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.308 qpair failed and we were unable to recover it.
00:34:19.308 [2024-07-25 20:04:28.390135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-07-25 20:04:28.390174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.308 qpair failed and we were unable to recover it.
00:34:19.308 [2024-07-25 20:04:28.390856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-07-25 20:04:28.390881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.308 qpair failed and we were unable to recover it.
00:34:19.309 [2024-07-25 20:04:28.392692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.392722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.392842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.392869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.393030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.393195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.393324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.393481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.393654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.393836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.393987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.394129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 
00:34:19.309 [2024-07-25 20:04:28.394293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.394464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.394586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.394788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.394933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.394959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.395080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.395235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.395392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.395539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.395674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 
00:34:19.309 [2024-07-25 20:04:28.395817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.395962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.395987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.396111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.396139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.396272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.396298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.396403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.396429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.396575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.396605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.396761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.396787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.396938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.396963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.397155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.397180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.397304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.397329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 
00:34:19.309 [2024-07-25 20:04:28.397433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.397458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.309 [2024-07-25 20:04:28.397638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.309 [2024-07-25 20:04:28.397666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.309 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.397813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.397838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.397957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.397982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.398108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.398136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.398262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.398288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.398389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.398415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.398541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.398567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.398692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.398719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.398832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.398861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 
00:34:19.310 [2024-07-25 20:04:28.399002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.399155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.399292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.399467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.399601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.399720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.399887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.399916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.400098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.400148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.400249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.400276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.400403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.400429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 
00:34:19.310 [2024-07-25 20:04:28.400557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.400582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.400709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.400734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.400883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.400909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.401855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.401880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 
00:34:19.310 [2024-07-25 20:04:28.402021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.402091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.310 [2024-07-25 20:04:28.402249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.310 [2024-07-25 20:04:28.402276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.310 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.402384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.402409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.402510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.402536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.402699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.402726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.402845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.402869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.402957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.402982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.403165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.403190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.403301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.403326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.403452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.403477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 
00:34:19.311 [2024-07-25 20:04:28.403578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.403608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.403711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.403735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.403870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.403894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.404005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.404032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.404204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.404229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.404356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.404396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.404538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.404565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.404716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.404740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.404886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.404914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.405014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 
00:34:19.311 [2024-07-25 20:04:28.405162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.405281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.405435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.405561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.405717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.405933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.405976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.406133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.406161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.406290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.406317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.406454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.406483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.406610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.406636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 
00:34:19.311 [2024-07-25 20:04:28.406763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.406789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.406939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.406966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.407103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.407129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.407224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.407248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.407395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.407423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.407606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.407631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.407734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.407774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.407913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.407948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.408069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.408095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.311 qpair failed and we were unable to recover it. 00:34:19.311 [2024-07-25 20:04:28.408225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.311 [2024-07-25 20:04:28.408251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 
00:34:19.312 [2024-07-25 20:04:28.408374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.408404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.408571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.408596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.408745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.408794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.408908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.408936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.409050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.409206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.409324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.409445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.409568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.409716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 
00:34:19.312 [2024-07-25 20:04:28.409857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.409884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.410962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.410987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.411173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.411212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.411319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.411348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 
00:34:19.312 [2024-07-25 20:04:28.411477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.411504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.411654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.411683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.411858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.411883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.412951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.412979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 
00:34:19.312 [2024-07-25 20:04:28.413157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.413195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.413324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.413351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.413472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.413497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.413623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.413649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.413763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.413788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.413895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.413922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.414088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.312 [2024-07-25 20:04:28.414123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.312 qpair failed and we were unable to recover it. 00:34:19.312 [2024-07-25 20:04:28.414263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.414287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.414444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.414486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.414587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.414615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 
00:34:19.313 [2024-07-25 20:04:28.414760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.414786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.414925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.414967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.415945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.415974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.416090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 
00:34:19.313 [2024-07-25 20:04:28.416221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.416345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.416475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.416625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.416767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.416914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.416939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.417083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.417107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.417202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.417226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.417348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.417373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.417494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.417535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 
00:34:19.313 [2024-07-25 20:04:28.417695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.417723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.417871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.417896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.418905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.418931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.419115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.419144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 
00:34:19.313 [2024-07-25 20:04:28.419265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.419290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.419400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.419425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.419582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.419607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.419741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.419766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.419860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.313 [2024-07-25 20:04:28.419884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.313 qpair failed and we were unable to recover it. 00:34:19.313 [2024-07-25 20:04:28.419994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.420121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.420244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.420404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.420559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 
00:34:19.314 [2024-07-25 20:04:28.420707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.420838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.420877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.421039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.421189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.421340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.421513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.421676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.421840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.421986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.422138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 
00:34:19.314 [2024-07-25 20:04:28.422264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.422422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.422555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.422703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.422813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.422967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.422992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.423150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.423178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.423295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.423320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.423416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.423441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.423530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.423555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 
00:34:19.314 [2024-07-25 20:04:28.423652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.423677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.423795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.423820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.423982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.424164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.424297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.424445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.424601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.424757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.424945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.424974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.314 [2024-07-25 20:04:28.425134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.425160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 
00:34:19.314 [2024-07-25 20:04:28.425288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.314 [2024-07-25 20:04:28.425313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.314 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.425459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.425487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.425639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.425663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.425789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.425814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.425960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.425990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.426149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.426176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.426274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.426299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.426399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.426441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.426592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.426618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.426745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.426770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 
00:34:19.315 [2024-07-25 20:04:28.426923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.426948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.427923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.427948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.428081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.428107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.428219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.428247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 
00:34:19.315 [2024-07-25 20:04:28.428394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.428419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.428542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.428567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.428732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.428757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.428912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.428937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.429034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.429080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.429199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.429227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.429407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.429432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.429524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.429549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.429731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.429759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.429876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.429901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 
00:34:19.315 [2024-07-25 20:04:28.430028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.430053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.430160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.430186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.430287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.430313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.430414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.430439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.315 [2024-07-25 20:04:28.430556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.315 [2024-07-25 20:04:28.430581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.315 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.430670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.430695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.430799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.430837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.430971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.431205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.431334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 
00:34:19.316 [2024-07-25 20:04:28.431462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.431618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.431736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.431953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.431986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.432140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.432168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.432295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.432321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.432486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.432533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.432704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.432729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.432906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.432953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.433070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 
00:34:19.316 [2024-07-25 20:04:28.433238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.433391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.433508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.433656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.433782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.433975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.433999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.434132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.434158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.434264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.434289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.434433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.434461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.434607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.434633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 
00:34:19.316 [2024-07-25 20:04:28.434731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.434759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.434888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.434914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.435042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.435077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.435223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.435249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.435422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.435450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.435619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.435645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.435822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.435870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.436038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.436070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.436200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.436225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.436318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.436344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 
00:34:19.316 [2024-07-25 20:04:28.436461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.436486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.436612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.316 [2024-07-25 20:04:28.436637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.316 qpair failed and we were unable to recover it. 00:34:19.316 [2024-07-25 20:04:28.436810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.436838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.436978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.437141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.437288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.437439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.437620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.437805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.437951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.437979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 
00:34:19.317 [2024-07-25 20:04:28.438120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.438147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.438270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.438295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.438450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.438478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.438597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.438624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.438715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.438740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.438876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.438903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.439028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.439054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.439211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.439237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.439378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.439407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.439531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.439556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 
00:34:19.317 [2024-07-25 20:04:28.439714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.439754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.439897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.439927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.440108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.440135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.440235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.440263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.440392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.440446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.440604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.440630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.440773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.440801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.440909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.440938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.441052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.441083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.441211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.441236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 
00:34:19.317 [2024-07-25 20:04:28.441353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.441379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.441472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.441496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.441633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.441658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.441828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.441858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.441984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.442010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.442166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.442194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.442327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.442370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.442514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.442541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.442667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.442692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 00:34:19.317 [2024-07-25 20:04:28.442848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.317 [2024-07-25 20:04:28.442891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.317 qpair failed and we were unable to recover it. 
00:34:19.318 [2024-07-25 20:04:28.443011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.443144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.443308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.443428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.443554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.443688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.443863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.443894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.444009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.444230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.444358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 
00:34:19.318 [2024-07-25 20:04:28.444484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.444646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.444796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.444936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.444961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.445083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.445111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.445239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.445265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.445391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.445434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.445583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.445613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.445733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.445760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.445860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.445885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 
00:34:19.318 [2024-07-25 20:04:28.446047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.446204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.446323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.446520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.446690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.446812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.446964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.446989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.447115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.447140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.447264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.447289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.447433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.447461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 
00:34:19.318 [2024-07-25 20:04:28.447593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.447618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.447714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.447738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.447914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.447945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.448109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.448140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.448247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.448274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.448424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.448453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.448631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.318 [2024-07-25 20:04:28.448657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.318 qpair failed and we were unable to recover it. 00:34:19.318 [2024-07-25 20:04:28.448801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.448840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.448974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.449002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.449158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.449184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 
00:34:19.319 [2024-07-25 20:04:28.449278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.449303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.449506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.449555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.449696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.449721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.449870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.449895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.450002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.450155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.450266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.450442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.450637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.450771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 
00:34:19.319 [2024-07-25 20:04:28.450946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.450976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.451120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.451147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.451242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.451267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.451439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.451467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.451613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.451638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.451764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.451806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.451914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.451941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.452093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.452120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.452250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.452275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.452400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.452429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 
00:34:19.319 [2024-07-25 20:04:28.452556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.452586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.452689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.452714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.452853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.452882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.453111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.453265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.453379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.453525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.453670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.319 [2024-07-25 20:04:28.453850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.319 qpair failed and we were unable to recover it. 00:34:19.319 [2024-07-25 20:04:28.453992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.454017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 
00:34:19.320 [2024-07-25 20:04:28.454153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.454195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.454352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.454395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.454554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.454583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.454686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.454713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.454823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.454850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.454983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.455137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.455259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.455411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.455560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 
00:34:19.320 [2024-07-25 20:04:28.455689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.455842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.455868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.455993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.456194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.456349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.456501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.456652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.456781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.456906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.456931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.457087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.457115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 
00:34:19.320 [2024-07-25 20:04:28.457241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.457267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.457389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.457414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.457546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.457572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.457696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.457721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.457846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.457888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.458048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.458081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.458203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.458228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.458329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.458354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.458503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.458531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.458654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.458680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 
00:34:19.320 [2024-07-25 20:04:28.458802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.458831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.458979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.459007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.459184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.459210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.459310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.459334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.459428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.459453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.459579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.320 [2024-07-25 20:04:28.459605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.320 qpair failed and we were unable to recover it. 00:34:19.320 [2024-07-25 20:04:28.459733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.459759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.459888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.459914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.460078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.460103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.460200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.460224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-07-25 20:04:28.460320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.460362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.460508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.460533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.460656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.460682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.460864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.460892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.461039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.461072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.461203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.461228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.461391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.461419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.461563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.461589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.461738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.461779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.461930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.461972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-07-25 20:04:28.462155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.462183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.462351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.462379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.462545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.462593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.462717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.462742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.462845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.462870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.463021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.463049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.463172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.463198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.463337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.463375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.463528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.463577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.463697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.463722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-07-25 20:04:28.463842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.463867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.464041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.464189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.464339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.464483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.464660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.464843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.464986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.465148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.465261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-07-25 20:04:28.465382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.465535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.465659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-07-25 20:04:28.465779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-07-25 20:04:28.465823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.465967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.465993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.466118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.466144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.466269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.466293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.466396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.466421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.466547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.466572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.466713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.466773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-07-25 20:04:28.466914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.466941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.467076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.467102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.467281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.467308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.467459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.467484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.467593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.467618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.467712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.467740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.467899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.467924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.468021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.468187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.468331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-07-25 20:04:28.468446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.468611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.468759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.468889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.468915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.469050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.469083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.469184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.469210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.469338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.469364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.469513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.469545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.469693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.469720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.469850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.469891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-07-25 20:04:28.470053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.470084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.470218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.470242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.470334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.470359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.470541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.470569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.470708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.470733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.470829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.470856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.470986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.471012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.471165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.471191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.471378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.471427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.471558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.471595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-07-25 20:04:28.471729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.471755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-07-25 20:04:28.471868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-07-25 20:04:28.471895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.471992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.472118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.472244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.472395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.472547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.472697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.472874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.472903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.473051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.473081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-07-25 20:04:28.473235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.473261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.473404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.473433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.473555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.473581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.473734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.473760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.473898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.473926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.474044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.474074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.474205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.474231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.474356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.474398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.474561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.474587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.474713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.474739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-07-25 20:04:28.474862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.474886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.474987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.475946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.475972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.476097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.476123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-07-25 20:04:28.476254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.476280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.476410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.476453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.476562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.476605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.476734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.476759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.476878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.476903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.477000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.477025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.477142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.477167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.477269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.477295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.477391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-07-25 20:04:28.477417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-07-25 20:04:28.477516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.477541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-07-25 20:04:28.477636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.477662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.477764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.477789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.477914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.477940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.478894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.478920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-07-25 20:04:28.479029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.479197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.479329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.479504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.479632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.479768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.479890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.479915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.480042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.480179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.480299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-07-25 20:04:28.480446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.480598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.480716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.480849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.480874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.481000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.481130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.481281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.481406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.481601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-07-25 20:04:28.481749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-07-25 20:04:28.481869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-07-25 20:04:28.481894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.481996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.482129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.482252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.482370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.482524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.482673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.482811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.482839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 
00:34:19.325 [2024-07-25 20:04:28.483282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.483973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.483998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.484124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.484164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.484292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.484319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.484444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.484470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.484570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.484596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 
00:34:19.325 [2024-07-25 20:04:28.484713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.484738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.484888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.484914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.485903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.485942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.486075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 
00:34:19.325 [2024-07-25 20:04:28.486200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.486327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.486487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.486643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.486797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.486950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.486975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.487084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.487110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.487209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.487235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-07-25 20:04:28.487359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-07-25 20:04:28.487387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.487504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.487529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-07-25 20:04:28.487633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.487658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.487757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.487782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.487931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.487956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.488869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.488893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-07-25 20:04:28.489017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.489974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.489999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.490119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.490244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-07-25 20:04:28.490363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.490481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.490613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.490737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.490875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.490916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-07-25 20:04:28.491652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.491844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.491981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.492006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.492104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.492131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.492233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.492258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.492378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-07-25 20:04:28.492404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-07-25 20:04:28.492528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.492553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.492682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.492711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.492882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.492911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-07-25 20:04:28.493188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.493884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.493977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.494100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.494242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.494386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-07-25 20:04:28.494529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.494681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.494861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.494891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.495841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.495867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-07-25 20:04:28.495980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.496903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.496928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.497023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.497048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.497156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.497182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-07-25 20:04:28.497280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.497305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.497427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.497451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.497551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.497592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.497709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-07-25 20:04:28.497734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-07-25 20:04:28.497852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.497880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.497989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.498179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.498296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.498469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.498619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-07-25 20:04:28.498756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.498907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.498936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.499870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.499894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-07-25 20:04:28.500159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.500891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.500984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.501118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.501255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.501400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-07-25 20:04:28.501557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.501704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.501821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.501951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.501976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.502081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.502108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.502235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.502260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.502379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.502403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.502510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.502535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.502631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.502657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-07-25 20:04:28.502783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-07-25 20:04:28.502808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-07-25 20:04:28.502934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.502959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.503975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.503999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-07-25 20:04:28.504219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.504856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.504985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.505128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.505246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.505404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-07-25 20:04:28.505547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.505671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.505803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.505921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.505947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.506073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.506098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.506214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.506242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.506369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.506394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.506521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.506546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.506674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.506701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.506855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.506880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-07-25 20:04:28.507007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.507033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.507141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.507166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.507264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.507290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.507381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.507406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-07-25 20:04:28.507533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-07-25 20:04:28.507560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.507660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.507686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.507777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.507802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.507901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.507929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.508074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.508117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.508211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.508236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-07-25 20:04:28.508336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.508361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.508545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.508570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.508713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.508737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.508866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.508893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-07-25 20:04:28.509856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.509881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.509987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.510949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.510979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.511098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-07-25 20:04:28.511230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.511376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.511525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.511651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.511778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.511907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.511932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.512030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.512056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.512188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.512213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.512309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.512334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.512428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.512452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-07-25 20:04:28.512577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.512601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-07-25 20:04:28.512731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-07-25 20:04:28.512756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.512853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.512878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.512998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.513174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.513297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.513443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.513574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.513706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.513888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.513914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-07-25 20:04:28.514027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.514211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.514336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.514499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.514623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.514807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.514955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.514980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.515083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.515206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.515340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-07-25 20:04:28.515460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.515656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.515815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.515934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.515959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.516079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.516203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.516351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.516476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.516599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.516744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-07-25 20:04:28.516917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.516945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.517896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.517936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.518071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-07-25 20:04:28.518097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-07-25 20:04:28.518198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.518224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-07-25 20:04:28.518375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.518400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.518513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.518538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.518641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.518666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.518753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.518778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.518878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.518905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.519071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.519234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.519358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.519486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.519666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-07-25 20:04:28.519814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.519942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.519968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.520948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.520973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.521066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.521091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-07-25 20:04:28.521213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.521238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.521338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.521363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.521495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.521520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.521645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.521673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.521826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.521852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.522025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.522053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.522226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.522254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.522380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.522408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.522518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.522544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.522676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.522702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-07-25 20:04:28.522822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.522847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.522996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.523021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.523130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.523156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.523283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.523309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.523430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.523455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.523560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.523589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.523715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-07-25 20:04:28.523740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-07-25 20:04:28.523835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.523860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.523962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.523987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.524112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-07-25 20:04:28.524253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.524370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.524518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.524669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.524799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.524936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.524963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.525093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.525120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.525218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.525245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.525384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.525410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.525560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.525590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-07-25 20:04:28.525695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.525724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.525847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.525871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.526925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.526951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.527050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-07-25 20:04:28.527183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.527314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.527498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.527627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.527790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.527927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.527953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.528055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.528094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.528218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.528243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.528340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-07-25 20:04:28.528365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-07-25 20:04:28.528487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.528512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-07-25 20:04:28.528668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.528693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.528821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.528846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.528967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.528992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.529848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.529877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-07-25 20:04:28.530001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.530032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.530173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.530211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.530348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.530384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.530536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.530565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.530677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.530705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.530816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.530844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.530993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.531138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.531283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.531433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-07-25 20:04:28.531579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.531752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.531964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.531993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.532179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.532306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.532451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.532576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.532713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.532877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.532979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.533007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-07-25 20:04:28.533153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.533179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.533320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.533358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.533507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.533557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.533676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.533719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.533892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.533935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.534038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.534072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-07-25 20:04:28.534192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-07-25 20:04:28.534221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.534389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.534432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.534602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.534650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.534825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.534870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-07-25 20:04:28.535026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.535052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.535176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.535220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.535342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.535385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.535544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.535583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.535731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.535779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.535885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.535912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.536041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.536090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.536221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.536246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.536340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.536365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.536510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.536538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-07-25 20:04:28.536727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.536790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.536938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.536969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.537085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.537131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.537295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.537339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.537485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.537512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.537628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.537654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.537804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.537830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.537951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.537976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.538116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.538145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.538303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.538331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-07-25 20:04:28.538466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.538510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.538660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.538704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.538857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.538882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.539943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.539968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-07-25 20:04:28.540074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.540101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.540248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.540291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.540445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.540477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.540594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.540624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-07-25 20:04:28.540787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-07-25 20:04:28.540815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.540916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.540944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.541051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.541100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.541225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.541250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.541381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.541426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.541544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.541573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 
00:34:19.336 [2024-07-25 20:04:28.541771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.541799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.541899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.541927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.542072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.542217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.542331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.542487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.542739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.542876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.542997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.543188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 
00:34:19.336 [2024-07-25 20:04:28.543352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.543505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.543633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.543785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.543915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.543942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.544086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.544126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.544261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.544303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.544566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.544616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.544748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.544800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.544932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.544961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 
00:34:19.336 [2024-07-25 20:04:28.545108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.545135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.545252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.545280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.545464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.545506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.545617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.545657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.545807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.545831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.545983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.546007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.546133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.546159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.546268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.546295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.546476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.546518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.546714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.546762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 
00:34:19.336 [2024-07-25 20:04:28.546886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.546912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.547040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-07-25 20:04:28.547075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-07-25 20:04:28.547205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.547235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.547377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.547406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.547555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.547583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.547757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.547807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.547948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.547975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.548140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.548167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.548267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.548292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.548433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.548462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 
00:34:19.337 [2024-07-25 20:04:28.548663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.548710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.548816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.548844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.548942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.548971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.549091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.549116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.549268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.549293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.549474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.549502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.549695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.549748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.549900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.549930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.550056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.550088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.550213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.550238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 
00:34:19.337 [2024-07-25 20:04:28.550336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.550362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.550503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.550530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.550734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.550761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.550902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.550930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.551031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.551065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.551216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.551241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.551409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.551438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.551541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.551569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.551761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.551791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.551948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.551973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 
00:34:19.337 [2024-07-25 20:04:28.552100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.552130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.552259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.552284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.552409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.552458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.552557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.552585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.552764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.552819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.552928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.552955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.553064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.553090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.553271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.553315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.553465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.337 [2024-07-25 20:04:28.553494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.337 qpair failed and we were unable to recover it. 00:34:19.337 [2024-07-25 20:04:28.553624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.553666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 
00:34:19.338 [2024-07-25 20:04:28.553841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.553889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.554011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.554037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.554202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.554245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.554368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.554397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.554549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.554598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.554765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.554810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.554957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.554984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.555122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.555161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.555297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.555324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.555504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.555557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 
00:34:19.338 [2024-07-25 20:04:28.555776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.555823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.555958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.555989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.556163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.556201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.556380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.556410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.556578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.556625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.556748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.556798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.556956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.556995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.557114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.557153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.557282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.557309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.557492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.557543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 
00:34:19.338 [2024-07-25 20:04:28.557733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.557792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.557923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.557951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.558120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.558159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.558308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.558346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.558463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.558509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.558636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.558661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.558822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.558870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.558998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.559024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.559180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.559210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.559377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.559406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 
00:34:19.338 [2024-07-25 20:04:28.559577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.559630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.338 qpair failed and we were unable to recover it. 00:34:19.338 [2024-07-25 20:04:28.559833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.338 [2024-07-25 20:04:28.559881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.560946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.560974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.561101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.561127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 
00:34:19.339 [2024-07-25 20:04:28.561270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.561299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.561450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.561493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.561602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.561631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.561741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.561766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.561873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.561897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.561995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.562149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.562288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.562457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.562637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 
00:34:19.339 [2024-07-25 20:04:28.562759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.562920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.562958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.563070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.563097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.563204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.563230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.563350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.563408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.563539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.563590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.563809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.563861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.563997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.564026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.564183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.564211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.564358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.564400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 
00:34:19.339 [2024-07-25 20:04:28.564548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.564591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.564716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.564761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.564886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.564912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.565038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.565070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.565189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.565218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.565352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.565380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.565550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.565600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.565775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.565801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.565898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.339 [2024-07-25 20:04:28.565924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.339 qpair failed and we were unable to recover it. 00:34:19.339 [2024-07-25 20:04:28.566073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.566111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 
00:34:19.340 [2024-07-25 20:04:28.566300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.566343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.566524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.566581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.566753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.566783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.566953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.566979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.567083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.567109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.567234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.567278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.567424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.567467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.567624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.567670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.567763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.567789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.567913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.567939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 
00:34:19.340 [2024-07-25 20:04:28.568034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.568179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.568336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.568501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.568650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.568770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.568885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.568912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.569032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.569173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.569315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 
00:34:19.340 [2024-07-25 20:04:28.569436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.569590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.569745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.569925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.569950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.570077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.570103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.570247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.570276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.570436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.570479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.570603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.570632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.570782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.570807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.570896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.570920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 
00:34:19.340 [2024-07-25 20:04:28.571029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.571076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.571226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.571268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.571377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.571406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.571569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.571594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.571744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.571770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.571869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.571894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.340 qpair failed and we were unable to recover it. 00:34:19.340 [2024-07-25 20:04:28.572047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.340 [2024-07-25 20:04:28.572080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.572173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.572197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.572323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.572355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.572490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.572519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 
00:34:19.341 [2024-07-25 20:04:28.572684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.572734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.572886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.572936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.573079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.573110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.573238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.573277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.573408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.573439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.573581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.573610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.573781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.573832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.573979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.574006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.574105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.574131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.574303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.574346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 
00:34:19.341 [2024-07-25 20:04:28.574514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.574575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.574727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.574780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.574905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.574930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.575046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.575109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.575273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.575315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.575436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.575466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.575576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.575604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.575709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.575739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.575875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.575904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.576041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.576076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 
00:34:19.341 [2024-07-25 20:04:28.576213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.576239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.576370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.576414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.576560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.576603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.576779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.576823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.576967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.577006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.577145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.577176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.577306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.577349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.577486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.577522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.577687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.577740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.577860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.577885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 
00:34:19.341 [2024-07-25 20:04:28.578004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.578030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.578167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.578196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.578293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.578335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.341 [2024-07-25 20:04:28.578504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.341 [2024-07-25 20:04:28.578552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.341 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.578727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.578773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.578941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.578994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.579116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.579142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.579267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.579293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.579413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.579441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.579607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.579635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 
00:34:19.342 [2024-07-25 20:04:28.579758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.579797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.579929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.579960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.580104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.580131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.580225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.580250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.580377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.580402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.580517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.580572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.580726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.580754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.580863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.580893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.581064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.581090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.581215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.581240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 
00:34:19.342 [2024-07-25 20:04:28.581346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.581385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.581560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.581588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.581688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.581716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.581845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.581873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.582007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.582038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.582205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.582244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.582394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.582423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.582567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.582595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.582758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.582807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.582949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.582978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 
00:34:19.342 [2024-07-25 20:04:28.583136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.583161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.583268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.583293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.583403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.583431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.583595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.583646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.583774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.583823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.583962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.583993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.584149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.584176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.584277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.584304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.584467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.584517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.342 qpair failed and we were unable to recover it. 00:34:19.342 [2024-07-25 20:04:28.584720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.342 [2024-07-25 20:04:28.584759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 
00:34:19.343 [2024-07-25 20:04:28.584908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.584936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.585052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.585087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.585215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.585239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.585338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.585363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.585509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.585537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.585673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.585700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.585838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.585868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.586008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.586036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.586169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.586195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.586323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.586365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 
00:34:19.343 [2024-07-25 20:04:28.586501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.586529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.586676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.586710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.586838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.586867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.587074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.587224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.587370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.587559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.587722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.587864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.587993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.588021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 
00:34:19.343 [2024-07-25 20:04:28.588192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.588231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.588370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.588409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.588553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.588583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.588773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.588826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.588975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.589000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.589141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.589168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.589266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.589292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.589421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.589446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.589570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.589613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.589731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.589761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 
00:34:19.343 [2024-07-25 20:04:28.589989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.590016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.590144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.590170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.590273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.343 [2024-07-25 20:04:28.590298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.343 qpair failed and we were unable to recover it. 00:34:19.343 [2024-07-25 20:04:28.590440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.590469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.590631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.590660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.590774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.590806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.590970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.590998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.591121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.591147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.591261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.591286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.591438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.591462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 
00:34:19.344 [2024-07-25 20:04:28.591633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.591661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.591769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.591797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.592001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.592030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.592157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.592183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.592279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.592304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.592442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.592509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.592780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.592830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.592959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.592986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.593089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.593114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.593235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.593260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 
00:34:19.344 [2024-07-25 20:04:28.593408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.593437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.593635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.593697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.593804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.593832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.593988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.594015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.594153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.594179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.594280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.594305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.594512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.594551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.594738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.594785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.594934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.594959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.595087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.595113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 
00:34:19.344 [2024-07-25 20:04:28.595210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.595236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.595358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.595383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.595511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.595540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.595757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.595807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.595945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.595974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.596158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.596184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.596307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.596332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.596423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.596449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.596605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.596634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 00:34:19.344 [2024-07-25 20:04:28.596785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.596828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.344 qpair failed and we were unable to recover it. 
00:34:19.344 [2024-07-25 20:04:28.596997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.344 [2024-07-25 20:04:28.597025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.597198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.597224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.597354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.597379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.597472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.597497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.597648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.597678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.597836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.597864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.598000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.598027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.598184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.598209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.598348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.598374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.598514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.598542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 
00:34:19.345 [2024-07-25 20:04:28.598654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.598682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.598820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.598848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.598993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.599162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.599325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.599443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.599616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.599777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.599914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.599941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.600084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.600124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 
00:34:19.345 [2024-07-25 20:04:28.600260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.600287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.600415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.600446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.600624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.600652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.600771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.600813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.600980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.601007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.601155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.601181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.601302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.601328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.601478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.601506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.601623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.601651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 00:34:19.345 [2024-07-25 20:04:28.601783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.345 [2024-07-25 20:04:28.601811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.345 qpair failed and we were unable to recover it. 
00:34:19.345 [2024-07-25 20:04:28.603183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.345 [2024-07-25 20:04:28.603211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.345 qpair failed and we were unable to recover it.
[The same three-line failure record repeats continuously from 20:04:28.601 through 20:04:28.634, alternating between tqpair=0x7fc95c000b90 and tqpair=0x7fc96c000b90; every attempt is a connect() to addr=10.0.0.2, port=4420 that fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:34:19.351 [2024-07-25 20:04:28.634468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.634493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.634621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.634646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.634741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.634767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.634890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.634917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.635087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.635117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.635289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.635318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.635439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.635465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.635609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.635634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.635726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.635752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.635905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.635931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 
00:34:19.351 [2024-07-25 20:04:28.636074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.636103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.636271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.636300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.636413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.636439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.636561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.636587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.636724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.636753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.636914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.636942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.637113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.637139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.637263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.637289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.637405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.637444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.637595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.637640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 
00:34:19.351 [2024-07-25 20:04:28.637788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.637835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.637991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.638017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.638173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.638200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.638331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.638359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.638488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.638514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.638666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.638692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.638837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.638866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.638991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.639120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.639267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 
00:34:19.351 [2024-07-25 20:04:28.639470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.639632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.639795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.639958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.639987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.640095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.640140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.351 [2024-07-25 20:04:28.640273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.351 [2024-07-25 20:04:28.640299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.351 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.640407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.640436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.640558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.640588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.640721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.640750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.640853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.640897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 
00:34:19.352 [2024-07-25 20:04:28.640998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.641023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.641129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.641155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.641247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.641272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.641425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.641453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.641613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.641641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.641807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.641858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.642015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.642040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.642154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.642192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.642348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.642374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.642512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.642557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 
00:34:19.352 [2024-07-25 20:04:28.642658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.642685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.642857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.642901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.643045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.643199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.643395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.643526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.643689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.643854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.643990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.644019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 00:34:19.352 [2024-07-25 20:04:28.644196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.352 [2024-07-25 20:04:28.644222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.352 qpair failed and we were unable to recover it. 
00:34:19.352 [2024-07-25 20:04:28.644331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.644359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.644490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.644519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.644649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.644677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.644843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.644887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.645009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.645035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.645220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.645249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.645425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.645453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.645610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.645655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.645752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.645778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.645909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.645937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 
00:34:19.353 [2024-07-25 20:04:28.646085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.646130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.646268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.646301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.646433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.646462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.646563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.646592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.646731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.646760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.646977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.647125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.647292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.647455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.647601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 
00:34:19.353 [2024-07-25 20:04:28.647774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.647954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.647980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.648131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.648253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.648375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.648499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.648648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.648809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.648973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.649001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.649148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.649187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 
00:34:19.353 [2024-07-25 20:04:28.649336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.649375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.649565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.649611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.649751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.649793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.649896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.649922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.650042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.650075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.650226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.650269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.650441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.650484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.353 [2024-07-25 20:04:28.650577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.353 [2024-07-25 20:04:28.650602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.353 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.650761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.650792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.650927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.650955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 
00:34:19.354 [2024-07-25 20:04:28.651106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.651133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.651256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.651284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.651430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.651458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.651609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.651634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.651786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.651814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.651977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.652006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.652159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.652186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.652328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.652357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.652478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.652521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.652667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.652696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 
00:34:19.354 [2024-07-25 20:04:28.652843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.652871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.653023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.653053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.653192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.653218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.653378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.653405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.653548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.653589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.653760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.653799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.653935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.653961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.654115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.654142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.654261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.654290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.654423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.654451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 
00:34:19.354 [2024-07-25 20:04:28.654615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.654644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.654801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.654830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.654963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.654992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.655162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.655201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.655321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.655350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.655460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.655489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.655622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.655650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.655818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.655876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.656016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.656044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.656160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.656187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 
00:34:19.354 [2024-07-25 20:04:28.656357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.656385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.656488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.656517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.656631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.656660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.656781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.656826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.656981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.657008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.657133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.657161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.657280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.657325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.657463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.657506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.657658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.354 [2024-07-25 20:04:28.657706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.354 qpair failed and we were unable to recover it. 00:34:19.354 [2024-07-25 20:04:28.657913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-07-25 20:04:28.657966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 
[ ... the same three-line failure sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for tqpair values 0x7fc95c000b90, 0x7fc964000b90, 0x7fc96c000b90 and 0x99c840 through [2024-07-25 20:04:28.691729] ... ]
00:34:19.360 [2024-07-25 20:04:28.691883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.691909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.692038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.692069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.692211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.692255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.692396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.692425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.692613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.692664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.692791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.692816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.692980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.693122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.693286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.693448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 
00:34:19.360 [2024-07-25 20:04:28.693602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.693769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.693914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.693939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.694089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.694148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.694261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.694293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.694442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.694470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.694629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.694681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.694851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.694901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.695084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.695125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.695274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.695302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 
00:34:19.360 [2024-07-25 20:04:28.695446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.695474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.695613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.695641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.695760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.695818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.695985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.696011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.696162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.696207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.696343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.696392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.696542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.696590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.696779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.696805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.696935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.696960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.697112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.697151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 
00:34:19.360 [2024-07-25 20:04:28.697283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.697310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.697443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.697469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.697625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.697651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.697747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.697773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.697867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.697893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.360 qpair failed and we were unable to recover it. 00:34:19.360 [2024-07-25 20:04:28.697997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.360 [2024-07-25 20:04:28.698024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.698150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.698194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.698364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.698406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.698515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.698545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.698662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.698687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 
00:34:19.361 [2024-07-25 20:04:28.698819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.698844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.698969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.698995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.699124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.699149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.699303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.699328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.699455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.699480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.699642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.699690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.699813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.699839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.699962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.699987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.700132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.700175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.700319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.700366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 
00:34:19.361 [2024-07-25 20:04:28.700531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.700574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.700679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.700705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.700858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.700884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.701054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.701099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.701263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.701293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.701439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.701468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.701606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.701636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.701834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.701898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.702090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.702140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.702258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.702287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 
00:34:19.361 [2024-07-25 20:04:28.702435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.702485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.702663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.702716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.702826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.702856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.702982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.703124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.703267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.703428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.703581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.703727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.703892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.703920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 
00:34:19.361 [2024-07-25 20:04:28.704081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.704124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.704225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.704252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.704404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.704447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.704589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.361 [2024-07-25 20:04:28.704638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.361 qpair failed and we were unable to recover it. 00:34:19.361 [2024-07-25 20:04:28.704752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.704779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.704924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.704949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.705044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.705077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.705182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.705207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.705370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.705413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.705554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.705584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 
00:34:19.362 [2024-07-25 20:04:28.705721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.705750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.705895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.705923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.706973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.706999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.707118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.707144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 
00:34:19.362 [2024-07-25 20:04:28.707274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.707299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.707448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.707475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.707647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.707675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.707803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.707835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.708000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.708027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.708161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.708188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.708363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.708415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.708529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.708557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.708665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.708692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.708809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.708856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 
00:34:19.362 [2024-07-25 20:04:28.709012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.709167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.709303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.709490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.709679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.709848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.709968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.709993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.710123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.710149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.710263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.710306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.710401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.710426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 
00:34:19.362 [2024-07-25 20:04:28.710559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.710585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.362 [2024-07-25 20:04:28.710685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.362 [2024-07-25 20:04:28.710711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.362 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.710821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.710859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.710974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.711165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.711299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.711449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.711599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.711766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.711911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.711936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 
00:34:19.650 [2024-07-25 20:04:28.712041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.712179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.712304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.712434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.712627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.712770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.712938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.712970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.713099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.713224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.713370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 
00:34:19.650 [2024-07-25 20:04:28.713496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.713619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.713738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.713865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.713897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.714006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.714044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.714160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.714187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.714291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.714317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.714464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.714493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.714618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.714643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.714802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.714827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 
00:34:19.650 [2024-07-25 20:04:28.714975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.715013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.715143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.715182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.715333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.715363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.715598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.715650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.715790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.715838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.715994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.716021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.716127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.716153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.650 [2024-07-25 20:04:28.716280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.650 [2024-07-25 20:04:28.716308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.650 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.716439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.716483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.716620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.716684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 
00:34:19.651 [2024-07-25 20:04:28.716780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.716805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.716910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.716935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.717892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.717917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.718056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.718101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 
00:34:19.651 [2024-07-25 20:04:28.718243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.718281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.718425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.718451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.718560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.718588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.718721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.718769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.718893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.718936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.719081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.719107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.719265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.719298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.719437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.719466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.719646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.719695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.719861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.719912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 
00:34:19.651 [2024-07-25 20:04:28.720053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.720092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.720263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.720289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.720411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.720441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.720557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.720592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.720778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.720831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.720955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.720981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.721139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.721164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.721272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.721301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.721469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.721513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.721679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.721732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 
00:34:19.651 [2024-07-25 20:04:28.721858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.721883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.722005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.722031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.722199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.722241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.722387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.722417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.651 qpair failed and we were unable to recover it. 00:34:19.651 [2024-07-25 20:04:28.722570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.651 [2024-07-25 20:04:28.722611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.722770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.722829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.722955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.722980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.723137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.723179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.723298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.723327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.723439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.723467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 
00:34:19.652 [2024-07-25 20:04:28.723574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.723602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.723702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.723731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.723875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.723900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.724033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.724057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.724187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.724212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.724362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.724392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.724601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.724652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.724819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.724865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.725002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.725029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.725170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.725196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 
00:34:19.652 [2024-07-25 20:04:28.725343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.725401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.725517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.725547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.725682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.725711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.725844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.725906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.726048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.726205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.726359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.726516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.726665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.726804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 
00:34:19.652 [2024-07-25 20:04:28.726956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.726982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.727078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.727105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.727229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.727255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.727366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.727403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.727509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.727539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.727658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.727683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.727846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.727875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.728014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.728039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.728180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.728206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.728313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.728339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 
00:34:19.652 [2024-07-25 20:04:28.728504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.728532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.652 [2024-07-25 20:04:28.728635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.652 [2024-07-25 20:04:28.728664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.652 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.728804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.728832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.728944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.728972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.729076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.729122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.729225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.729251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.729347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.729390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.729514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.729543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.729674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.729703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.729900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.729929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 
00:34:19.653 [2024-07-25 20:04:28.730068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.730095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.730234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.730259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.730378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.730407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.730545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.730574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.730737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.730765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.730874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.730903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.731004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.731161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.731288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.731443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 
00:34:19.653 [2024-07-25 20:04:28.731594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.731740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.731901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.731929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.732120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.732277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.732409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.732560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.732705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.732838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.732977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 
00:34:19.653 [2024-07-25 20:04:28.733173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.733322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.733507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.733629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.733807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.733968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.733997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.734173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.734212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.734324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.734356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.653 qpair failed and we were unable to recover it. 00:34:19.653 [2024-07-25 20:04:28.734457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.653 [2024-07-25 20:04:28.734485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.734613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.734639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 
00:34:19.654 [2024-07-25 20:04:28.734772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.734801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.734909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.734939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.735069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.735126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.735234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.735260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.735367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.735392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.735567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.735617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.735755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.735797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.735911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.735942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.736057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.736089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.736215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.736242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 
00:34:19.654 [2024-07-25 20:04:28.736406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.736432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.736528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.736554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.736675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.736705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.736813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.736842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.736980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.737130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.737284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.737440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.737584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.737753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 
00:34:19.654 [2024-07-25 20:04:28.737922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.737952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.654 [2024-07-25 20:04:28.738949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.654 [2024-07-25 20:04:28.738977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.654 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.739089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.739132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.739252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.739290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 
00:34:19.655 [2024-07-25 20:04:28.739401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.739429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.739559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.739584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.739705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.739731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.739848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.739876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.739984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.740169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.740294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.740411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.740577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.740739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 
00:34:19.655 [2024-07-25 20:04:28.740896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.740938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.741104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.741273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.741413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.741575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.741740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.741874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.741982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.742141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.742316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 
00:34:19.655 [2024-07-25 20:04:28.742439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.742629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.742787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.742936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.742960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.743070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.743198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.743345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.743490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.743618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.743818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 
00:34:19.655 [2024-07-25 20:04:28.743959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.743984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.744093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.744132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.744275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.744303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.744426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.744454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-07-25 20:04:28.744589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-07-25 20:04:28.744616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.744750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.744778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.744874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.744902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.745044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.745197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.745319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-07-25 20:04:28.745504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.745649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.745786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.745933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.745975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.746090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.746239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.746369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.746491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.746657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.746775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-07-25 20:04:28.746927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.746952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.747881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.747909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.748041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.748219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-07-25 20:04:28.748338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.748460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.748593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.748736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.748879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.748905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.749099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.749218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.749349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.749474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.749592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-07-25 20:04:28.749773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-07-25 20:04:28.749928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-07-25 20:04:28.749961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.750943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.750969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.751069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 
00:34:19.657 [2024-07-25 20:04:28.751195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.751323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.751443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.751621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.751764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.751919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.751947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.752099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.752137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.752265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.752292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.752395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.752421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.752542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.752586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 
00:34:19.657 [2024-07-25 20:04:28.752715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.752740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.752868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.752892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.752998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.753129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.753306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.753436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.753563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.753725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.753869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.753904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.754033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 
00:34:19.657 [2024-07-25 20:04:28.754222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.754342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.754470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.754619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.754767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.754891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.754917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.755010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.755036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.755178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.755204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.755306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-07-25 20:04:28.755332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-07-25 20:04:28.755432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.755458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 
00:34:19.658 [2024-07-25 20:04:28.755593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.755619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.755746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.755772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.755874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.755900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.756893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.756919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 
00:34:19.658 [2024-07-25 20:04:28.757074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.757190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.757345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.757497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.757623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.757757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.757879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.757904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.758049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.758242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.758390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 
00:34:19.658 [2024-07-25 20:04:28.758516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.758669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.758826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.758974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.758999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.759096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.759249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.759374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.759528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.759682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.759831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 
00:34:19.658 [2024-07-25 20:04:28.759965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.759996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.760118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.760143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.760269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.760293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.760390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.760416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.760507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.760531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.760630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.658 [2024-07-25 20:04:28.760656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.658 qpair failed and we were unable to recover it. 00:34:19.658 [2024-07-25 20:04:28.760757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.760785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.760893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.760919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 
00:34:19.659 [2024-07-25 20:04:28.761329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.761855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.761979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.762101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.762228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.762384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.762497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 
00:34:19.659 [2024-07-25 20:04:28.762646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.762767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.762902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.762927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.763835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 
00:34:19.659 [2024-07-25 20:04:28.763970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.763998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.764160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.764185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.764313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.764340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.764501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.764543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.764671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.764698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.764795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.764820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.764948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.764973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.765074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.659 [2024-07-25 20:04:28.765101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.659 qpair failed and we were unable to recover it. 00:34:19.659 [2024-07-25 20:04:28.765206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.765232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.765361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.765387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 
00:34:19.660 [2024-07-25 20:04:28.765486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.765512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.765617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.765643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.765776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.765801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.765899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.765924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 
00:34:19.660 [2024-07-25 20:04:28.766789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.766907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.766931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.767956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.767983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 
00:34:19.660 [2024-07-25 20:04:28.768111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.768241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.768375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.768495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.768622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.768748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.768894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.768934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.769038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.769074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.769187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.769213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 00:34:19.660 [2024-07-25 20:04:28.769313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.660 [2024-07-25 20:04:28.769354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.660 qpair failed and we were unable to recover it. 
00:34:19.665 [2024-07-25 20:04:28.794054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.794207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.794340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.794478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.794610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.794735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.794854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.794880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.795006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.795149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.795297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 
00:34:19.665 [2024-07-25 20:04:28.795447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.795598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.795718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.795859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.795891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.796031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-07-25 20:04:28.796072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-07-25 20:04:28.796216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.796241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.796364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.796388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.796516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.796542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.796667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.796692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.796812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.796838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-07-25 20:04:28.796967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.796995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.797140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.797178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.797293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.797320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.797417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.797443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.797584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.797612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.797738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.797763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.797860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.797885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.798013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.798039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.798179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.798226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.798370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.798397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-07-25 20:04:28.798502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.798545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.798652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.798680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.798819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.798847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.798962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.799142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.799263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.799423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.799568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.799791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.799932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.799956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-07-25 20:04:28.800080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.800123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.800239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.800273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.800375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.800402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.800521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.800547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.800664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.800692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.800825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.800864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.800974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.801004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.801137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.801163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.801278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.801306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.801438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.801492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-07-25 20:04:28.801623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.801651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-07-25 20:04:28.801752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-07-25 20:04:28.801780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.801914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.801941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.802075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.802100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.802230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.802255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.802380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.802422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.802552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.802606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.802719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.802750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.802870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.802897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.803037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.803069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 
00:34:19.667 [2024-07-25 20:04:28.803196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.803222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.803320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.803346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.803468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.803497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.803605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.803633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.803774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.803803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.803961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.804004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.804162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.804189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.804294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.804339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.804492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.804535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.804666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.804694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 
00:34:19.667 [2024-07-25 20:04:28.804825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.804853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.804975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.805135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.805259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.805391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.805566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.805732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.805871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.805902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.806069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.806096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.806197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.806223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 
00:34:19.667 [2024-07-25 20:04:28.806324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.806373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.806490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.806532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.806697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.806725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.806852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.806894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.807038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.807069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.807168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.807194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.807348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.807374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.807507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.807532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.807656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.807684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-07-25 20:04:28.807820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-07-25 20:04:28.807850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 
00:34:19.668 [2024-07-25 20:04:28.807993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.808147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.808269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.808430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.808577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.808756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.808888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.808916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.809069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.809212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.809390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 
00:34:19.668 [2024-07-25 20:04:28.809514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.809672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.809814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.809945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.809972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.810123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.810148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.810239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.810264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.810357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.810382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.810527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.810554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.810694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.810722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.810860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.810888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 
00:34:19.668 [2024-07-25 20:04:28.810999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.811026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.811173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.811198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.811297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.811322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.811445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.811472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.811635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.811662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.811827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.811854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.811986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.812014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.812141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.812166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.812288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.812313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.812424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.812451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 
00:34:19.668 [2024-07-25 20:04:28.812569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.812611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.668 [2024-07-25 20:04:28.812742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.668 [2024-07-25 20:04:28.812791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.668 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.812958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.812986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.813121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.813147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.813241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.813266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.813404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.813432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.813571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.813599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.813710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.813738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.813855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.813883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.814082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.814138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 
00:34:19.669 [2024-07-25 20:04:28.814245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.814272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.814430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.814474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.814622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.814652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.814762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.814790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.814977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.815005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.815170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.815196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.815291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.815316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.815452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.815480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.815603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.815659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.815826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.815854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 
00:34:19.669 [2024-07-25 20:04:28.815997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.816122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.816240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.816388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.816531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.816713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.816876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.816903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.817022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.817071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.817224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.817257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.817386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.817412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 
00:34:19.669 [2024-07-25 20:04:28.817535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.817563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.817705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.817734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.817931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.817987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.818146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.818172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.818285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.818313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.818478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.818505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.818678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.818727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.818874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.818921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.669 qpair failed and we were unable to recover it. 00:34:19.669 [2024-07-25 20:04:28.819054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.669 [2024-07-25 20:04:28.819094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.819231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.819256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 
00:34:19.670 [2024-07-25 20:04:28.819346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.819388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.819554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.819582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.819759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.819803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.819954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.819979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.820117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.820157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.820270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.820299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.820447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.820490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.820635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.820678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.820830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.820870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.821001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 
00:34:19.670 [2024-07-25 20:04:28.821149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.821321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.821466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.821634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.821755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.821905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.821935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.822048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.822176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.822328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.822453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 
00:34:19.670 [2024-07-25 20:04:28.822569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.822741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.822887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.822912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.823056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.823101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.823234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.823261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.823412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.823458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.823682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.823732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.823856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.823882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.823982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.824149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 
00:34:19.670 [2024-07-25 20:04:28.824300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.824444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.824615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.824737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.824931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.670 [2024-07-25 20:04:28.824959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.670 qpair failed and we were unable to recover it. 00:34:19.670 [2024-07-25 20:04:28.825115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.825143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.825251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.825279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.825406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.825433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.825617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.825662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.825799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.825843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 
00:34:19.671 [2024-07-25 20:04:28.825976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.826005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.826163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.826189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.826293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.826325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.826507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.826536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.826644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.826672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.826833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.826861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.826985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.827140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.827272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.827408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 
00:34:19.671 [2024-07-25 20:04:28.827631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.827758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.827929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.827958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.828105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.828131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.828261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.828286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.828400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.828429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.828581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.828612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.828779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.828808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.829029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.829063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.829208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.829233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 
00:34:19.671 [2024-07-25 20:04:28.829333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.829358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.829466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.829494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.829631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.829659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.829775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.829804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.830036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.830074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.830195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.830220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.830364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.830392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.830518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.830560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.830727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.830755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.830881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.830914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 
00:34:19.671 [2024-07-25 20:04:28.831034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.831066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.671 qpair failed and we were unable to recover it. 00:34:19.671 [2024-07-25 20:04:28.831193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.671 [2024-07-25 20:04:28.831218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.831360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.831388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.831513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.831555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.831721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.831749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.831891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.831918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.832042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.832073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.832209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.832234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.832378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.832406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.832520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.832545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 
00:34:19.672 [2024-07-25 20:04:28.832669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.832697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.832826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.832868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.833039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.833072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.833196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.833222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.833365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.833392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.833546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.833573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.833711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.833739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.833838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.833866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.834033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.834084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.834223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.834250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 
00:34:19.672 [2024-07-25 20:04:28.834343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.834368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.834543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.834586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.834723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.834751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.834894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.834940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.835074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.835101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.835239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.835278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.835387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.835419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.835573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.835598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.835726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.835753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.835869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.835895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 
00:34:19.672 [2024-07-25 20:04:28.835998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.836154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.836340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.836468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.836606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.836824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.836952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.836980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.837134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.837162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-07-25 20:04:28.837263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-07-25 20:04:28.837288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.837385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.837412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-07-25 20:04:28.837571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.837597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.837771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.837814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.837912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.837937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.838089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.838135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.838309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.838337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.838506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.838563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.838745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.838796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.838960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.838988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.839115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.839141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.839281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.839309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-07-25 20:04:28.839426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.839451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.839573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.839600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.839789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.839839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.839977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.840138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.840326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.840492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.840631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.840791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.840949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.840977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-07-25 20:04:28.841131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.841156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.841244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.841269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.841367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.841392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.841506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.841534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.841668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.841696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.841802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.841829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.841981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.842006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.842125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.842164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.842331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.842358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-07-25 20:04:28.842501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-07-25 20:04:28.842544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-07-25 20:04:28.842715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.842760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.842887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.842913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.843069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.843112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.843231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.843276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.843450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.843494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.843597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.843623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.843869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.843920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.844010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.844036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.844184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.844227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.844336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.844366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-07-25 20:04:28.844470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.844504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.844641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.844670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.844810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.844841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.845008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.845035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.845165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.845191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.845338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.845381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.845540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.845567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.845708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.845751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.845904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.845930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.846097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.846126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-07-25 20:04:28.846261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.846289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.846450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.846493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.846635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.846680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.846807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.846833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.846978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.847017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.847199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.847229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.847369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.847397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.847564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.847613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.847726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.847755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.847922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.847950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-07-25 20:04:28.848092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.848118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.848238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.848269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.848412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.848441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.848570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.848598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.848701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.848730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.848848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.848877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-07-25 20:04:28.848986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-07-25 20:04:28.849012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.849158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.849196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.849356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.849386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.849554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.849583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-07-25 20:04:28.849716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.849744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.849848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.849876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.850068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.850112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.850264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.850292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.850401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.850443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.850554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.850585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.850786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.850816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.850946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.850972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.851111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.851147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.851274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.851301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-07-25 20:04:28.851406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.851454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.851599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.851628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.851846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.851872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.852021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.852049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.852182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.852207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.852352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.852380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.852511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.852539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.852719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.852760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.852905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.852931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.853023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.853048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-07-25 20:04:28.853206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.853231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.853393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.853454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.853570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.853612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.853748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.853778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.853923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.853951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.854090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.854117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.854244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.854270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.854518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.854566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.854779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.854832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.854968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.854997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-07-25 20:04:28.855110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.855136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-07-25 20:04:28.855263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-07-25 20:04:28.855288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.855489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.855545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.855651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.855692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.855828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.855857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.856025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.856055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.856197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.856236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.856408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.856450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.856665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.856721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.856950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 
00:34:19.676 [2024-07-25 20:04:28.857126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.857289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.857462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.857613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.857758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.857924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.857953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.858071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.858097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.858246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.858272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.858445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.858486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.858609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.858654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 
00:34:19.676 [2024-07-25 20:04:28.858786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.858815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.859007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.859049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.859216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.859242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.859373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.859398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.859525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.859551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.859703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.859756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.859892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.859920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.860030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.860068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.860244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.860270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.860418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.860447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 
00:34:19.676 [2024-07-25 20:04:28.860592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.860617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.860768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.860796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.860926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.860954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.861108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.861134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.676 [2024-07-25 20:04:28.861227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.676 [2024-07-25 20:04:28.861252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.676 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.861347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.861391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.861532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.861560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.861696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.861723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.861866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.861894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.862051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.862118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 
00:34:19.677 [2024-07-25 20:04:28.862237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.862276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.862441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.862483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.862690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.862740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.862952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.863003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.863146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.863172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.863281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.863306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.863483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.863534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.863799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.863870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.864011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.864039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.864183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.864222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 
00:34:19.677 [2024-07-25 20:04:28.864376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.864414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.864599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.864650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.864773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.864803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.864939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.864964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.865090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.865117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.865274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.865299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.865431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.865456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.865608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.865634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.865760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.865786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.865911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.865937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 
00:34:19.677 [2024-07-25 20:04:28.866113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.866144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.866290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.866318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.866468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.866527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.866660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.866717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.866848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.866873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.866998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.867023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.867130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.867156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.867276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.867301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.867499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.867526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.677 [2024-07-25 20:04:28.867664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.867694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 
00:34:19.677 [2024-07-25 20:04:28.867800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.677 [2024-07-25 20:04:28.867828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.677 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.867962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.867987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.868112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.868137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.868254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.868282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.868421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.868453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.868601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.868658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.868840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.868885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.868996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.869022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.869161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.869206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.869342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.869385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 
00:34:19.678 [2024-07-25 20:04:28.869530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.869572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.869775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.869827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.869924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.869949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.870045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.870077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.870246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.870289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.870432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.870461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.870598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.870624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.870742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.870768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.870900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.870926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.871030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 
00:34:19.678 [2024-07-25 20:04:28.871181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.871383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.871586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.871719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.871839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.871965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.871990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.872131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.872174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.872318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.872348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.872519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.872562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.678 [2024-07-25 20:04:28.872682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.872712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 
00:34:19.678 [2024-07-25 20:04:28.872852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.678 [2024-07-25 20:04:28.872878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.678 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.872986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.873012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.873165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.873209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.873321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.873364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.873480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.873509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.873680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.873706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.873824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.873850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.873977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.874004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.874157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.874201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.874392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.874424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 
00:34:19.679 [2024-07-25 20:04:28.874683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.874733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.874864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.874889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.874990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.875126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.875276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.875459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.875616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.875755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.875919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.875958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.876095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.876123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 
00:34:19.679 [2024-07-25 20:04:28.876272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.876315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.876455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.876498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.876652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.876695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.876827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.876852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.876961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.876987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.877117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.877143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.877264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.877292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.877440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.877467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.877700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.877752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.877886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.877913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 
00:34:19.679 [2024-07-25 20:04:28.878038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.878072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.878212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.878251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.878364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.878408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.878605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.878634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.878770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.878800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.878935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.679 [2024-07-25 20:04:28.878963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.679 qpair failed and we were unable to recover it. 00:34:19.679 [2024-07-25 20:04:28.879076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.879120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.879220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.879245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.879356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.879384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.879548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.879576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 
00:34:19.680 [2024-07-25 20:04:28.879714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.879742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.879854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.879888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.880052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.880089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.880239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.880265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.880382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.880412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.880557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.880586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.880730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.880774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.880911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.880939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.881074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.881117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.881252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.881278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 
00:34:19.680 [2024-07-25 20:04:28.881398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.881440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.881577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.881607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.881746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.881775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.881905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.881934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.882084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.882123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.882264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.882291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.882418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.882461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.882598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.882626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.882826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.882854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.882967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.882994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 
00:34:19.680 [2024-07-25 20:04:28.883101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.883143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.883267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.883291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.883401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.883429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.883536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.883563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.883727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.883755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.883860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.883888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.884033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.884067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.884213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.884239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.884358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.884406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.884530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.884559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 
00:34:19.680 [2024-07-25 20:04:28.884762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.884790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.884943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.884968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.885093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.885119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.680 qpair failed and we were unable to recover it. 00:34:19.680 [2024-07-25 20:04:28.885218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.680 [2024-07-25 20:04:28.885243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.885358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.885386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.885521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.885549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.885729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.885786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.885893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.885919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.886014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.886171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 
00:34:19.681 [2024-07-25 20:04:28.886306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.886450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.886632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.886779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.886905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.886932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.887071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.887099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.887256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.887281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.887431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.887456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.887643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.887690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.887854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.887881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 
00:34:19.681 [2024-07-25 20:04:28.888013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.888040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.888212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.888240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.888378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.888405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.888551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.888578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.888724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.888753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.888871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.888902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.889025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.889187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.889365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.889492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 
00:34:19.681 [2024-07-25 20:04:28.889650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.889783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.889910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.889935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.890030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.890055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.890240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.890268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.890390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.890449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.890616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.890643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.890750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.890777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.890903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.890931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-07-25 20:04:28.891083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-07-25 20:04:28.891110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-07-25 20:04:28.891263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.891305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.891450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.891497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.891646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.891688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.891806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.891831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.891958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.891983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.892130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.892160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.892303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.892332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.892483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.892511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.892612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.892640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.892778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.892805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-07-25 20:04:28.892943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.892970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.893094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.893122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.893247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.893296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.893477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.893518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.893689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.893751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.893879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.893905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.894003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.894160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.894315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.894438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-07-25 20:04:28.894588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.894765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.894890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.894914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.895043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.895080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.895194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.895237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.895357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.895386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.895553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.895596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.895723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.895749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.895879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.895905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.896005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-07-25 20:04:28.896167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.896314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.896488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.896664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.896816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.896940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.896964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-07-25 20:04:28.897065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-07-25 20:04:28.897090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.897187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.897211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.897387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.897432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.897544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.897578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-07-25 20:04:28.897716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.897762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.897891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.897916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.898042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.898077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.898254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.898301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.898473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.898502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.898666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.898693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.898828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.898856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.898989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.899014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.899166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.899191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.899342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.899372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-07-25 20:04:28.899497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.899539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.899679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.899706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.899844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.899871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.900053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.900179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.900335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.900491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.900683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.900846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.900992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-07-25 20:04:28.901154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.901279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.901446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.901654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.901797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.901972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.901999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.902130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-25 20:04:28.902159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-07-25 20:04:28.902265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.902289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.902410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.902435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.902558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.902582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 
00:34:19.684 [2024-07-25 20:04:28.902756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.902780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.902909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.902951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.903088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.903128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.903236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.903261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.903418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.903443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.903588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.903615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.903756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.903783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.903948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.903974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.904087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.904126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-07-25 20:04:28.904279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-07-25 20:04:28.904304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 
[repeated: the same three-line failure sequence recurs for every subsequent connection attempt from 2024-07-25 20:04:28.904 through 20:04:28.937 (elapsed 00:34:19.684 to 00:34:19.690), with tqpair values 0x99c840 and 0x7fc964000b90; each attempt targets addr=10.0.0.2, port=4420, fails with connect() errno = 111, and the qpair cannot be recovered. Final attempt in this run:]
00:34:19.690 [2024-07-25 20:04:28.937408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.690 [2024-07-25 20:04:28.937434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.690 qpair failed and we were unable to recover it.
00:34:19.690 [2024-07-25 20:04:28.937566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.937593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.937719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.937764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.937910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.937946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.938090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.938117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.938263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.938304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.938430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.938473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.938600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.938626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.938738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.938782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.938909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.938936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.939104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.939133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 
00:34:19.690 [2024-07-25 20:04:28.939298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.939342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.939504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.939530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.939631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.939657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.939790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.939817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.939983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.940145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.940294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.940491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.940657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.940795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 
00:34:19.690 [2024-07-25 20:04:28.940964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.940991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.941128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.941153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.941270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.941300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.941409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.941437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.941585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.941613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.941748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.941774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-07-25 20:04:28.941908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-07-25 20:04:28.941935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.942077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.942118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.942206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.942230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.942360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.942385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 
00:34:19.691 [2024-07-25 20:04:28.942519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.942546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.942659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.942696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.942834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.942868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.942980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.943007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.943171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.943196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.943324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.943348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.943521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.943548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.943691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.943718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.943881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.943909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.944044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.944086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 
00:34:19.691 [2024-07-25 20:04:28.944228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.944254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.944423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.944461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.944601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.944629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.944834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.944861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.944998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.945028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.945167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.945192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.945332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.945360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.945521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.945549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.945674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.945698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.945852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.945880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 
00:34:19.691 [2024-07-25 20:04:28.946009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.946034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.946138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.946163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.946294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.946319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.946415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.946446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.946610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.946655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.946806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.946833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.946972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.947000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.947144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.947173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.947312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.691 [2024-07-25 20:04:28.947339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.691 qpair failed and we were unable to recover it. 00:34:19.691 [2024-07-25 20:04:28.947479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.947506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 
00:34:19.692 [2024-07-25 20:04:28.947668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.947703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.947818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.947846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.948903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.948931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.949035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.949078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 
00:34:19.692 [2024-07-25 20:04:28.949197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.949222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.949342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.949371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.949473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.949501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.949652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.949709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.949860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.949906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.950018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.950044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.950171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.950215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.950355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.950398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.950572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.950614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.950840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.950896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 
00:34:19.692 [2024-07-25 20:04:28.951047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.951093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.951268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.951311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.951521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.951574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.951741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.951786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.951888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.951919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.952074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.952101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.952225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.952267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.952427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.952453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.952610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.952654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.952782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.952807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 
00:34:19.692 [2024-07-25 20:04:28.952912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.952937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.953038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.953066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.953232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.953258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-25 20:04:28.953387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.692 [2024-07-25 20:04:28.953413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.692 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.953555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.953580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.953683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.953708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.953850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.953892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.954036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.954169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.954288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-25 20:04:28.954491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.954653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.954787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.954969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.954993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.955120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.955146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.955242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.955266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.955414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.955441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.955607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.955635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.955742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.955769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.955873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.955913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-25 20:04:28.956015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.956039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.956191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.956235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.956368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.956399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.956512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.956541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.956658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.956687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.956830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.956860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.956975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.957166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.957329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.957489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-25 20:04:28.957648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.957822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.957967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.957992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.958106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.958132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.958232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.958256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.958358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.958383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.958510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.958536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.958641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.693 [2024-07-25 20:04:28.958668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-25 20:04:28.958772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.958798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.958937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.958961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-25 20:04:28.959070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.959241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.959373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.959546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.959714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.959854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.959974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.959999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.960147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.960175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.960374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.960405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.960541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.960568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-25 20:04:28.960712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.960739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.960878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.960905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.961088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.961127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.961298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.961325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.961512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.961542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.961651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.961681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.961799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.961827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.961976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.962002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.962107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.962134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.962237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.962263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-25 20:04:28.962426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.962456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.962624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.962653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.962798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.962826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.962988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.963014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.963167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.963205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.963333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.963369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.963554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.963583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.963752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.963780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.963888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.963917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.964022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.964056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-25 20:04:28.964190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.964216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.964340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.964374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.964543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.964568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.964720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.964747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-25 20:04:28.964875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.694 [2024-07-25 20:04:28.964916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.965093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.965122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.965245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.965269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.965382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.965411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.965532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.965572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.965706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.965734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 
00:34:19.695 [2024-07-25 20:04:28.965854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.965881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.965989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.966120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.966270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.966445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.966606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.966775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.966939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.966966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.967132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.967157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.967255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.967280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 
00:34:19.695 [2024-07-25 20:04:28.967406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.967430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.967553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.967578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.967751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.967780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.967923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.967948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.968130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.968155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.968255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.968281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.968391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.968415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.968548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.968588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.968740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.968766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.968904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.968931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 
00:34:19.695 [2024-07-25 20:04:28.969094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.969218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.969356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.969484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.969616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.969786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.969959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.695 [2024-07-25 20:04:28.969985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.695 [2024-07-25 20:04:28.970141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.970167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.970266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.970291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.970417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.970445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 
00:34:19.696 [2024-07-25 20:04:28.970553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.970580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.970753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.970781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.970890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.970917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.971070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.971112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.971205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.971230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.971379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.971404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.971522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.971552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.971657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.971698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.971864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.971891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.972017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.972042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 
00:34:19.696 [2024-07-25 20:04:28.972186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.972211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.972322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.972362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.972537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.972573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.972723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.972753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.972895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.972921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.973023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.973186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.973336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.973514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.973648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 
00:34:19.696 [2024-07-25 20:04:28.973770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.973889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.973913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.974893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.974918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 00:34:19.696 [2024-07-25 20:04:28.975032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.696 [2024-07-25 20:04:28.975067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.696 qpair failed and we were unable to recover it. 
00:34:19.697 [2024-07-25 20:04:28.975195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.975238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.975343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.975368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.975520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.975545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.975642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.975673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.975774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.975799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.975909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.975936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.976050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.976100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.976211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.976238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.976362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.976391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.976514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.976542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 
00:34:19.697 [2024-07-25 20:04:28.976711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.976738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.976867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.976894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.977937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.977966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.978085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.978112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 
00:34:19.697 [2024-07-25 20:04:28.978222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.978247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.978348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.978398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.978547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.978581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.978679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.978706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.978850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.978878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.978992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.979018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.979158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.979182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.979288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.979312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.979507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.979531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.979683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.979724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 
00:34:19.697 [2024-07-25 20:04:28.979859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.979888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.980055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.980089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.980230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.980259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.980379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.980407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.980550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.980600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.980735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.980763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.697 [2024-07-25 20:04:28.980901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.697 [2024-07-25 20:04:28.980929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.697 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.981077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.981120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.981220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.981247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.981336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.981362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 
00:34:19.698 [2024-07-25 20:04:28.981541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.981567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.981810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.981838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.981939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.981976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.982113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.982145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.982244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.982270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.982367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.982393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.982566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.982593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.982790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.982817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.982917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.982945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.983102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.983129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 
00:34:19.698 [2024-07-25 20:04:28.983232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.983257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.983391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.983423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.983576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.983604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.983723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.983766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.983930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.983958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.984103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.984147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.984246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.984272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.984412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.984437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.984590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.984617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.984767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.984796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 
00:34:19.698 [2024-07-25 20:04:28.984897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.984925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.985050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.985087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.985233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.985271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.985408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.985445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.985618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.985647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.985809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.985852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.985955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.985982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.986108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.986135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.986236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.986262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 00:34:19.698 [2024-07-25 20:04:28.986362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.698 [2024-07-25 20:04:28.986388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.698 qpair failed and we were unable to recover it. 
00:34:19.698 [2024-07-25 20:04:28.986513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.698 [2024-07-25 20:04:28.986539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:19.698 qpair failed and we were unable to recover it.
00:34:19.699 [2024-07-25 20:04:28.989314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.699 [2024-07-25 20:04:28.989356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.699 qpair failed and we were unable to recover it.
00:34:19.699 [2024-07-25 20:04:28.990214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.699 [2024-07-25 20:04:28.990258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.699 qpair failed and we were unable to recover it.
00:34:19.700 [2024-07-25 20:04:28.991164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.700 [2024-07-25 20:04:28.991203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.700 qpair failed and we were unable to recover it.
00:34:19.700 [2024-07-25 20:04:28.992736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.700 [2024-07-25 20:04:28.992763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:19.700 qpair failed and we were unable to recover it.
00:34:19.701 [2024-07-25 20:04:28.999790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.701 [2024-07-25 20:04:28.999822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:19.701 qpair failed and we were unable to recover it.
00:34:19.703 [2024-07-25 20:04:29.014850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.703 [2024-07-25 20:04:29.014878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:19.703 qpair failed and we were unable to recover it.
00:34:19.704 [2024-07-25 20:04:29.020983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.704 [2024-07-25 20:04:29.021009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:19.704 qpair failed and we were unable to recover it.
00:34:19.704 [2024-07-25 20:04:29.021171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.021215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.021373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.021401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.021536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.021580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.021711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.021737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.021861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.021887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.021977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.022175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.022339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.022517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.022648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 
00:34:19.704 [2024-07-25 20:04:29.022809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.022963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.022989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.023112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.023140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.023304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.023333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.023450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.023491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.023662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.023690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.023859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.023904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.024041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.024078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.024198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.024222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.024382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.024410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 
00:34:19.704 [2024-07-25 20:04:29.024540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.024568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.024669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.024697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.024855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.024886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.025029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.025055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.025197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.025222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.025364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.704 [2024-07-25 20:04:29.025393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.704 qpair failed and we were unable to recover it. 00:34:19.704 [2024-07-25 20:04:29.025539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.025582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.025681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.025707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.025855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.025881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.026012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 
00:34:19.705 [2024-07-25 20:04:29.026180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.026304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.026502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.026627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.026775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.026924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.026949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.027085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.027111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.027221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.027247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.027424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.027468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.027645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.027689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 
00:34:19.705 [2024-07-25 20:04:29.027819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.027845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.027969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.027995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.028131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.028177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.028301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.028348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.028517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.028560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.028663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.028693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.028843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.028870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.028998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.029024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.029221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.029251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.029389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.029424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 
00:34:19.705 [2024-07-25 20:04:29.029592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.029626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.029759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.029784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.029953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.029978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.030144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.030171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.030291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.030316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.030458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.030500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.030599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.030628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.030728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.030756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.030868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.030896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.031074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.031118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 
00:34:19.705 [2024-07-25 20:04:29.031223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.031248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.031426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.031453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.031557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.031585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.705 qpair failed and we were unable to recover it. 00:34:19.705 [2024-07-25 20:04:29.031696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.705 [2024-07-25 20:04:29.031724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.031853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.031893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.032075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.032102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.032208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.032234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.032360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.032412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.032538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.032568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.032714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.032749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 
00:34:19.706 [2024-07-25 20:04:29.032876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.032902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.033027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.033052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.033244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.033288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.033449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.033494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.033669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.033711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.033845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.033871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.033976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.034128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.034285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.034436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 
00:34:19.706 [2024-07-25 20:04:29.034560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.034721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.034870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.034895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.035048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.035079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.035245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.035273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.035421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.035451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.035588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.035631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.035755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.035781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.035910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.035937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.036119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.036149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 
00:34:19.706 [2024-07-25 20:04:29.036265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.036292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.036390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.036420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.036585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.036612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.036732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.036758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.036920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.036945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.037079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.037207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.037333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.037486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.037612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 
00:34:19.706 [2024-07-25 20:04:29.037741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.037897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.037922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.038077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.038104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.038232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.038258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.038391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.038417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.038511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.038536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.038673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.038698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.706 qpair failed and we were unable to recover it. 00:34:19.706 [2024-07-25 20:04:29.038819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.706 [2024-07-25 20:04:29.038844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.038950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.038975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.039111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.039138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 
00:34:19.707 [2024-07-25 20:04:29.039286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.039314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.039479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.039507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.039625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.039653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.039798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.039827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.039958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.039986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.040147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.040174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.040316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.040360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.040519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.040545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.040752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.040796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.040941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.040967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 
00:34:19.707 [2024-07-25 20:04:29.041095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.041216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.041341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.041523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.041643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.041803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.041928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.041958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.042144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.042174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.042307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.042335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 00:34:19.707 [2024-07-25 20:04:29.042477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.042504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it. 
00:34:19.707 [2024-07-25 20:04:29.042648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.042676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it.
00:34:19.707 [2024-07-25 20:04:29.045150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.707 [2024-07-25 20:04:29.045179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:19.707 qpair failed and we were unable to recover it.
00:34:19.707 - 00:34:20.001 [the same error triplet (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x99c840 or tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-25 20:04:29.042648 through 20:04:29.076730; repeated occurrences condensed to the two representative entries above]
00:34:20.001 [2024-07-25 20:04:29.076828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.076856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.076998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.077022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.077195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.077220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.077381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.077410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.077550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.077577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.077711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.077739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.077882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.077910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.078043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.078077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.078198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.078222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.078369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.078393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 
00:34:20.001 [2024-07-25 20:04:29.078561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.078588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.078716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.078743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.078889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.078913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.079098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.079123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.079228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.079252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.079397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.079425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.079559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.079586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.079722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.079750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.079886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.001 [2024-07-25 20:04:29.079913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.001 qpair failed and we were unable to recover it. 00:34:20.001 [2024-07-25 20:04:29.080069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.080096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 
00:34:20.002 [2024-07-25 20:04:29.080241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.080266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.080398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.080426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.080566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.080593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.080757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.080784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.080926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.080954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.081067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.081092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.081214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.081239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.081354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.081382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.081546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.081574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.081711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.081739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 
00:34:20.002 [2024-07-25 20:04:29.081847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.081879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.081988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.082158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.082275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.082441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.082554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.082718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.082878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.082918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.083046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.083078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.083257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.083301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 
00:34:20.002 [2024-07-25 20:04:29.083427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.083470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.083566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.083592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.083736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.083764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.083931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.083957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.084124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.084151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.084282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.084308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.084448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.084474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.084608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.084635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.084763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.084789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.084881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.084906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 
00:34:20.002 [2024-07-25 20:04:29.085043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.085207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.085337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.085496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.085655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.085784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.002 [2024-07-25 20:04:29.085901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.002 [2024-07-25 20:04:29.085925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.002 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.086050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.086084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.086217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.086243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.086384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.086410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 
00:34:20.003 [2024-07-25 20:04:29.086541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.086567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.086723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.086749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.086876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.086903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.087969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.087995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 
00:34:20.003 [2024-07-25 20:04:29.088172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.088216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.088385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.088428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.088550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.088579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.088749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.088775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.088926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.088952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.089088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.089114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.089246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.089272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.089392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.089418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.089547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.089573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.089725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.089750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 
00:34:20.003 [2024-07-25 20:04:29.089907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.089932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.090065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.090092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.090236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.090280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.090426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.090469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.090629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.090656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.090786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.090813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.090910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.090934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.091030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.091054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.091159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.091184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.091309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.091337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 
00:34:20.003 [2024-07-25 20:04:29.091500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.091528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.091638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.091664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.091774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.091802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.091976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.092000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.003 qpair failed and we were unable to recover it. 00:34:20.003 [2024-07-25 20:04:29.092124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.003 [2024-07-25 20:04:29.092148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.092250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.092275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.092448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.092474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.092624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.092673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.092788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.092815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.092948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.092974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 
00:34:20.004 [2024-07-25 20:04:29.093089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.093115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.093238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.093262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.093410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.093437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.093566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.093593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.093703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.093730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.093867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.093895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.094040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.094198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.094349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.094494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 
00:34:20.004 [2024-07-25 20:04:29.094659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.094830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.094971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.094998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.095130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.095155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.095256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.095281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.095384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.095409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.095552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.095580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.095767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.095795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.095929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.095957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.096093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.096136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 
00:34:20.004 [2024-07-25 20:04:29.096241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.096266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.096407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.096434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.096576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.096604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.096735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.096759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.096875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.096903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.097090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.097116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.097220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.097244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.097370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.097394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.097528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.097555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 00:34:20.004 [2024-07-25 20:04:29.097760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.097789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it. 
00:34:20.004 [2024-07-25 20:04:29.097943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.004 [2024-07-25 20:04:29.097968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.004 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error to addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 20:04:29.097 through 20:04:29.131, alternating between tqpair=0x99c840 and tqpair=0x7fc964000b90; every connection attempt in this window is refused ...]
00:34:20.010 [2024-07-25 20:04:29.131929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.131956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it.
00:34:20.010 [2024-07-25 20:04:29.132101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.132133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.132273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.132313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.132466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.132510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.132626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.132655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.132792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.132818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.132971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.132997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.133097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.133124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.133250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.133276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.133444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.133487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 00:34:20.010 [2024-07-25 20:04:29.133575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.010 [2024-07-25 20:04:29.133601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.010 qpair failed and we were unable to recover it. 
00:34:20.010 [2024-07-25 20:04:29.133727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.133752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.133878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.133904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.134065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.134243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.134412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.134603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.134728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.134883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.134976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.135002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.135149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.135193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 
00:34:20.011 [2024-07-25 20:04:29.135337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.135379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.135496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.135526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.135666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.135693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.135823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.135850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.135979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.136129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.136279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.136442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.136611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.136783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 
00:34:20.011 [2024-07-25 20:04:29.136915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.136943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.137073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.137127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.137222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.137267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.137431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.137459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.137592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.137620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.137724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.137752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.137890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.137918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.138062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.138088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.138223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.138248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.138391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.138462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 
00:34:20.011 [2024-07-25 20:04:29.138647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.138675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.138878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.138905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.139040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.139074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.139198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.011 [2024-07-25 20:04:29.139223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.011 qpair failed and we were unable to recover it. 00:34:20.011 [2024-07-25 20:04:29.139323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.139348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.139471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.139498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.139639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.139666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.139780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.139812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.139952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.139980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.140092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.140118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 
00:34:20.012 [2024-07-25 20:04:29.140212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.140237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.140339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.140363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.140565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.140590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.140739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.140766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.140874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.140902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.141020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.141048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.141201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.141226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.141346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.141373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.141510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.141538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.141666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.141693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 
00:34:20.012 [2024-07-25 20:04:29.141830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.141857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.141976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.142132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.142248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.142409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.142570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.142764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.142932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.142959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.143116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.143142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.143245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.143271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 
00:34:20.012 [2024-07-25 20:04:29.143423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.143451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.143596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.143623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.143765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.143793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.143959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.143987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.144114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.144140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.144290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.144315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.144448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.144476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.144591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.144619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.144729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.144757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.144891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.144919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 
00:34:20.012 [2024-07-25 20:04:29.145032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.145065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.145188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.012 [2024-07-25 20:04:29.145213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.012 qpair failed and we were unable to recover it. 00:34:20.012 [2024-07-25 20:04:29.145305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.145334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.145451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.145479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.145645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.145673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.145801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.145845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.145973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.145998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.146103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.146128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.146249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.146274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.146408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.146433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 
00:34:20.013 [2024-07-25 20:04:29.146522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.146547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.146688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.146716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.146830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.146858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.147917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.147942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 
00:34:20.013 [2024-07-25 20:04:29.148069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.148119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.148269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.148308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.148494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.148541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.148666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.148696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.148867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.148893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.149016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.149041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.149205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.149249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.149365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.149394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.149558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.149603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.149705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.149737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 
00:34:20.013 [2024-07-25 20:04:29.149877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.149903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.150040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.150073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.150218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.150262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.150446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.150488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.150626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.150670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.150800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.150826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.150975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.151001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.151143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.151188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.151338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.151381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 00:34:20.013 [2024-07-25 20:04:29.151500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.013 [2024-07-25 20:04:29.151543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.013 qpair failed and we were unable to recover it. 
00:34:20.013 [2024-07-25 20:04:29.151697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.151722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.151848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.151874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.151996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.152022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.152151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.152195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.152340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.152370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.152514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.152542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.152660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.152688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.152833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.152859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.152982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.153007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 00:34:20.014 [2024-07-25 20:04:29.153110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.014 [2024-07-25 20:04:29.153135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.014 qpair failed and we were unable to recover it. 
00:34:20.014 [2024-07-25 20:04:29.153262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.014 [2024-07-25 20:04:29.153287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.014 qpair failed and we were unable to recover it.
00:34:20.015 [2024-07-25 20:04:29.160053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.015 [2024-07-25 20:04:29.160099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.015 qpair failed and we were unable to recover it.
00:34:20.015 [2024-07-25 20:04:29.160274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.015 [2024-07-25 20:04:29.160312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:20.015 qpair failed and we were unable to recover it.
(The same three-line record - "connect() failed, errno = 111", "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." - repeats for every further reconnect attempt through [2024-07-25 20:04:29.187810], cycling over tqpair 0x99c840, 0x7fc964000b90 and 0x7fc95c000b90.)
00:34:20.019 [2024-07-25 20:04:29.187953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.187978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.188082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.188109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.188256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.188285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.188401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.188429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.188585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.188612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.188739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.188781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.188882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.188911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.189014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.189044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.189206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.189232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.189340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.189365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 
00:34:20.020 [2024-07-25 20:04:29.189485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.189513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.189688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.189717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.189841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.189867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.190021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.190046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.190206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.190248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.190368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.190397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.190522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.190549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.190673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.190699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.190847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.190876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.191026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.191054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 
00:34:20.020 [2024-07-25 20:04:29.191216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.191241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.191365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.191391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.191553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.191578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.191703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.191728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.191854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.191882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.192023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.192178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.192326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.192444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.192562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 
00:34:20.020 [2024-07-25 20:04:29.192676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.192856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.192884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.020 qpair failed and we were unable to recover it. 00:34:20.020 [2024-07-25 20:04:29.193005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.020 [2024-07-25 20:04:29.193031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.193173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.193199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.193304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.193329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.193433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.193458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.193561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.193587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.193718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.193747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.193878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.193906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.194040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 
00:34:20.021 [2024-07-25 20:04:29.194195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.194330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.194514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.194694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.194844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.194964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.194989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.195120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.195147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.195253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.195279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.195408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.195434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.195556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.195582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 
00:34:20.021 [2024-07-25 20:04:29.195679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.195705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.195837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.195863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.196869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.196990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.197016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 
00:34:20.021 [2024-07-25 20:04:29.197151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.197178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.197279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.197305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.197487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.197516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.197657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.197685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.197830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.197855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.197980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.198005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.198167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.198198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.198345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.198371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.198493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.198518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 00:34:20.021 [2024-07-25 20:04:29.198696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.021 [2024-07-25 20:04:29.198724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.021 qpair failed and we were unable to recover it. 
00:34:20.021 [2024-07-25 20:04:29.198861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.198889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.199870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.199899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.200024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.200195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 
00:34:20.022 [2024-07-25 20:04:29.200363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.200507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.200691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.200826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.200971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.200997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.201100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.201249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.201372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.201524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.201642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 
00:34:20.022 [2024-07-25 20:04:29.201761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.201935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.201965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.202114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.202139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.202293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.202318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.202480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.202506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.202609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.202634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.202755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.202780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.202876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.202901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.203023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.203048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.203206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.203235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 
00:34:20.022 [2024-07-25 20:04:29.203351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.203377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.203475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.203500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.203648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.203677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.203826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.203851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.203987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.204016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.204153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.204180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.204303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.022 [2024-07-25 20:04:29.204328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.022 qpair failed and we were unable to recover it. 00:34:20.022 [2024-07-25 20:04:29.204520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.204548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.204675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.204700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.204824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.204849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 
00:34:20.023 [2024-07-25 20:04:29.205009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.205152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.205323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.205474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.205629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.205748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.205917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.205945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.206078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.206125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.206257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.206283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.206444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.206470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 
00:34:20.023 [2024-07-25 20:04:29.206575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.206600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.206750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.206790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.206933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.206961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.207072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.207100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.207243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.207268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.207372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.207397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.207556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.207581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.207707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.207732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.207881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.207906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.208007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 
00:34:20.023 [2024-07-25 20:04:29.208227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.208362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.208514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.208635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.208760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.208908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.208933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.209040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.209073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.209190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.209215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.209323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.209382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.209528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.209559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 
00:34:20.023 [2024-07-25 20:04:29.209704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.209731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.209865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.209907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.210041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.210086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.210200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.210229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.023 [2024-07-25 20:04:29.210381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.023 [2024-07-25 20:04:29.210407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.023 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.210534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.210561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.210705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.210734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.210869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.210898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.211044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.211079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.211173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.211199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 
00:34:20.024 [2024-07-25 20:04:29.211350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.211379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.211518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.211547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.211669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.211695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.211793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.211819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.211982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.212010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.212128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.212158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.212302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.212328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.212464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.212496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.212649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.212677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.212821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.212850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 
00:34:20.024 [2024-07-25 20:04:29.213056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.213093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.213207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.213232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.213366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.213391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.213641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.213690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.213860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.213886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.214069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.214099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.214240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.214268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.214410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.214438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.214595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.214621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.214764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.214794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 
00:34:20.024 [2024-07-25 20:04:29.214939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.214967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.215115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.215145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.215272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.215298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.215392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.215418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.215552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.215577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.215726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.215755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.215925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.215951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.216047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.024 [2024-07-25 20:04:29.216078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.024 qpair failed and we were unable to recover it. 00:34:20.024 [2024-07-25 20:04:29.216240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.216271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.216375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.216403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 
00:34:20.025 [2024-07-25 20:04:29.216520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.216546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.216640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.216665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.216813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.216841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.216977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.217172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.217324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.217477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.217603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.217807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.217959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.217985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 
00:34:20.025 [2024-07-25 20:04:29.218108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.218134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.218284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.218311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.218426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.218451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.218578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.218603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.218725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.218753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.218918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.218945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.219095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.219121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.219218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.219247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.219436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.219461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.219616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.219658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 
00:34:20.025 [2024-07-25 20:04:29.219771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.219796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.219899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.219923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.220897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.220925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.221054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.221083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 
00:34:20.025 [2024-07-25 20:04:29.221204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.221229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.221381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.221409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.221551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.221577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.221708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.025 [2024-07-25 20:04:29.221733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.025 qpair failed and we were unable to recover it. 00:34:20.025 [2024-07-25 20:04:29.221829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.221855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.221951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.221976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.222107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.222133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.222235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.222260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.222358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.222383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.222527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.222555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 
00:34:20.026 [2024-07-25 20:04:29.222703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.222730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.222826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.222852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.223002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.223030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.223173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.223201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.223330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.223357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.223510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.223535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.223703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.223729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.223826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.223853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.224002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.224027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.224153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.224179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 
00:34:20.026 [2024-07-25 20:04:29.224335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.224391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.224509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.224551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.224681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.224707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.224835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.224879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.225051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.225086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.225228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.225256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.225374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.225400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.225506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.225538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.225662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.225689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.225838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.225867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 
00:34:20.026 [2024-07-25 20:04:29.226006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.226033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.226177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.226204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.226344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.026 [2024-07-25 20:04:29.226373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.026 qpair failed and we were unable to recover it. 00:34:20.026 [2024-07-25 20:04:29.226540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.226569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.226688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.226714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.226854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.226882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.227042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.227081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.227204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.227231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.227348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.227374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.227538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.227580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 
00:34:20.027 [2024-07-25 20:04:29.227681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.227709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.227878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.227907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.228025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.228052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.228188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.228230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.228404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.228430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.228553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.228579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.228741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.228767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.228887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.228912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.229083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.229114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.229251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.229279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 
00:34:20.027 [2024-07-25 20:04:29.229425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.229450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.229576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.229601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.229791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.229816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.229945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.229970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.230134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.230160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.230295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.230320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.230440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.230482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.230609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.230636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.230741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.230766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.230894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.230919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 
00:34:20.027 [2024-07-25 20:04:29.231078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.231121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.231268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.231298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.231422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.231449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.231580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.231606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.231706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.231732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.231903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.231932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.232067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.232093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.232186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.232216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.232380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.232408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 00:34:20.027 [2024-07-25 20:04:29.232542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.027 [2024-07-25 20:04:29.232605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.027 qpair failed and we were unable to recover it. 
00:34:20.027 [2024-07-25 20:04:29.232727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.232753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.232849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.232875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.233055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.233089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.233242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.233268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.233365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.233391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.233543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.233585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.233725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.233755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.233859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.233901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.234000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.234167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 
00:34:20.028 [2024-07-25 20:04:29.234325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.234453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.234571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.234690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.234880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.234909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.235045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.235084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.235212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.235238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.235363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.235390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.235529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.235558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 00:34:20.028 [2024-07-25 20:04:29.235696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.028 [2024-07-25 20:04:29.235725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.028 qpair failed and we were unable to recover it. 
00:34:20.028 [2024-07-25 20:04:29.235853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.028 [2024-07-25 20:04:29.235879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:20.028 qpair failed and we were unable to recover it.
00:34:20.030 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7fc95c000b90 from 20:04:29.236002 through 20:04:29.243988 ...]
00:34:20.030 [2024-07-25 20:04:29.244147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.030 [2024-07-25 20:04:29.244176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.030 qpair failed and we were unable to recover it.
00:34:20.034 [... the same three-line failure repeats for tqpair=0x7fc96c000b90 from 20:04:29.244316 through 20:04:29.269346; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:20.034 [2024-07-25 20:04:29.269469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.269494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.269682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.269706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.269858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.269883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.270027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.270055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.270196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.270222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.270321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.270346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.270472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.270498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.270651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.270692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.270862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.270890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.271039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 
00:34:20.034 [2024-07-25 20:04:29.271200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.271364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.271487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.271669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.271819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.271970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.271995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.272099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.272125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.272221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.272246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.272363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.272388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 00:34:20.034 [2024-07-25 20:04:29.272515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.034 [2024-07-25 20:04:29.272541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.034 qpair failed and we were unable to recover it. 
00:34:20.034 [2024-07-25 20:04:29.272702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.272730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.272882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.272908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.273069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.273094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.273239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.273268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.273438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.273466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.273622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.273688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.273830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.273854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.273980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.274125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.274297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 
00:34:20.035 [2024-07-25 20:04:29.274459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.274586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.274735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.274900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.274928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.275056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.275091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.275223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.275248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.275391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.275419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.275557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.275584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.275727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.275752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.275847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.275874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 
00:34:20.035 [2024-07-25 20:04:29.275997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.276024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.276157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.276183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.276383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.276408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.276580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.276608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.276710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.276738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.276868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.276896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.277048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.277082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.277234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.277278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.277389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.277417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.277535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.277564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 
00:34:20.035 [2024-07-25 20:04:29.277702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.277728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.277854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.277879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.277997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.278025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.278155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.278184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.278353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.278378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.278507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.278548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.278693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.278718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.278813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.035 [2024-07-25 20:04:29.278839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.035 qpair failed and we were unable to recover it. 00:34:20.035 [2024-07-25 20:04:29.278967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.278992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.279114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.279157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 
00:34:20.036 [2024-07-25 20:04:29.279266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.279309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.279443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.279468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.279590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.279615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.279770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.279813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.279940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.279968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.280106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.280147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.280246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.280272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.280399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.280424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.280603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.280630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.280769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.280796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 
00:34:20.036 [2024-07-25 20:04:29.280921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.280964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.281097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.281140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.281270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.281295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.281418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.281446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.281590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.281621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.281748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.281774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.281930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.281955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.282107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.282136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.282285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.282310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.282438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.282463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 
00:34:20.036 [2024-07-25 20:04:29.282603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.282631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.282790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.282830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.282958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.282983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.283103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.283255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.283387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.283558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.283681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.283868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.283973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.284001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 
00:34:20.036 [2024-07-25 20:04:29.284123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.284149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.284268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.284293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.284458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.284486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.284653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.284679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.284832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.284857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.036 [2024-07-25 20:04:29.285029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.036 [2024-07-25 20:04:29.285057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.036 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.285173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.285201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.285333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.285360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.285503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.285528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.285655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.285681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 
00:34:20.037 [2024-07-25 20:04:29.285797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.285826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.285980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.286138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.286284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.286445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.286629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.286776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.286904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.286929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.287073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.287117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.287243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.287268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 
00:34:20.037 [2024-07-25 20:04:29.287397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.287422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.287542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.287567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.287743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.287771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.287918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.287944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 
00:34:20.037 [2024-07-25 20:04:29.288806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.288832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.288983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.289172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.289289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.289466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.289631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.289797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.289947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.289972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.290121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.290150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.290285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.290313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 
00:34:20.037 [2024-07-25 20:04:29.290488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.290513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.290603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.290628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.290742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.290770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.037 qpair failed and we were unable to recover it. 00:34:20.037 [2024-07-25 20:04:29.290915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.037 [2024-07-25 20:04:29.290941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.291068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.291093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.291219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.291260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.291400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.291428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.291540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.291569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.291713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.291739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.291884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.291913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 
00:34:20.038 [2024-07-25 20:04:29.292065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.292094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.292244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.292269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.292383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.292408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.292500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.292525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.292668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.292697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.292835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.292863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.293029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.293160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.293285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.293437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 
00:34:20.038 [2024-07-25 20:04:29.293590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.293709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.293885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.293913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.294111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.294177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.294329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.294358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.294488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.294530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.294644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.294671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.294808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.294836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.294985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.295011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.295169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.295212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 
00:34:20.038 [2024-07-25 20:04:29.295356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.295384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.295523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.295551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.295695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.295721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.295876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.295902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.296031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.296206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.296362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.296489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.296666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.038 [2024-07-25 20:04:29.296800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 
00:34:20.038 [2024-07-25 20:04:29.296970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.038 [2024-07-25 20:04:29.296995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.038 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.297122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.297163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.297297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.297340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.297467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.297492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.297619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.297645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.297771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.297796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.297962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.297987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.298114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.298140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.298240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.298265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.298409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.298434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 
00:34:20.039 [2024-07-25 20:04:29.298603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.298628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.298731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.298756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.298904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.298933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.299041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.299078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.299215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.299240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.299352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.299380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.299528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.299553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.299672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.299697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.299865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.299893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.300025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 
00:34:20.039 [2024-07-25 20:04:29.300209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.300335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.300477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.300622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.300776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.300929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.300954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.301115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.301140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.301238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.301263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.301361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.301387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.301509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.301533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 
00:34:20.039 [2024-07-25 20:04:29.301676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.301703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.301808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.301850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.301979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.302004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.302132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.302158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.039 qpair failed and we were unable to recover it. 00:34:20.039 [2024-07-25 20:04:29.302257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.039 [2024-07-25 20:04:29.302283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.302400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.302429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.302575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.302600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.302702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.302727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.302822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.302847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.302994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 
00:34:20.040 [2024-07-25 20:04:29.303176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.303302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.303501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.303638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.303783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.303950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.303979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.304123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.304149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.304251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.304275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.304379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.304404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.304503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.304528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 
00:34:20.040 [2024-07-25 20:04:29.304626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.304668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.304714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9aa390 (9): Bad file descriptor 00:34:20.040 [2024-07-25 20:04:29.304931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.304970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.305106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.305133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.305305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.305350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.305525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.305574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.305700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.305726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.305855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.305882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.306005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.306030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.306182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.306226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 
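(Editor's note: one entry in the block above differs from the connect() failures — nvme_tcp_qpair_process_completions reports "Failed to flush tqpair=0x9aa390 (9): Bad file descriptor". The value 9 is errno EBADF on Linux, which is what any I/O call returns once its socket descriptor has already been torn down. A minimal standalone sketch, not SPDK code, reproducing that errno with an ordinary closed descriptor:

/* Standalone sketch (assumption: not SPDK's flush path): errno 9 (EBADF)
 * is what write() returns when the descriptor it is handed has been closed,
 * matching the "(9): Bad file descriptor" entry in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                     /* the descriptor is gone ...             */
    if (write(fds[1], "x", 1) < 0) {   /* ... so flushing through it must fail   */
        /* prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
})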
00:34:20.040 [2024-07-25 20:04:29.306374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.306416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.306590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.306635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.306761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.306787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.306941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.306967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.307098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.307140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.307294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.307323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.307491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.307519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.307819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.307873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.308017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.308045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.308177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.308202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 
00:34:20.040 [2024-07-25 20:04:29.308349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.308377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.308487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.308515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.308657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.308685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.308853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.308902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.309056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.309087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.040 [2024-07-25 20:04:29.309191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.040 [2024-07-25 20:04:29.309217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.040 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.309336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.309365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.309500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.309543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.309693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.309743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.309898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.309924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-07-25 20:04:29.310046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.310077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.310204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.310247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.310362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.310390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.310574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.310617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.310762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.310792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.310936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.310966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.311164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.311193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.311300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.311329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.311494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.311522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.311643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.311669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-07-25 20:04:29.311796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.311821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.311918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.311944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.312098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.312125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.312248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.312273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.312422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.312447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.312600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.312626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.312752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.312778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.312893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.312931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.313071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.313115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.313253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.313282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-07-25 20:04:29.313421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.313449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.313603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.313655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.313820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.313848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.313961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.313987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.314111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.314137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.314265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.314307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.314419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.314463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.314689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.314741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.314871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.314898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.315049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-07-25 20:04:29.315190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.315310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.315465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.315585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.315704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.315865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.315892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.041 [2024-07-25 20:04:29.316016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.041 [2024-07-25 20:04:29.316041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.041 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.316143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.316168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.316261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.316291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.316410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.316438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-07-25 20:04:29.316606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.316634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.316746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.316774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.316912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.316940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.317121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.317147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.317284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.317312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.317420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.317448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.317582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.317609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.317776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.317821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.317951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.317976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.318116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.318145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-07-25 20:04:29.318288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.318313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.318439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.318464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.318594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.318621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.318764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.318790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.318916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.318941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.319043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.319075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.319228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.319253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.319357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.319382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.319532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.319557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.319712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.319739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-07-25 20:04:29.319840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.319868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.320004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.320031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.320179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.320210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.320401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.320445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.320590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.320633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.320806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.320849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.320956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.320982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.321125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.321169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.321316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.321345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.321510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.321538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-07-25 20:04:29.321676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.321703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.321868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.321895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.322002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.322030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.322157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.322182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.322320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.322345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.322436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.322478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.322645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.322673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.042 [2024-07-25 20:04:29.322806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.042 [2024-07-25 20:04:29.322847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.042 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.322942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.322968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.323071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.323098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 
00:34:20.043 [2024-07-25 20:04:29.323245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.323273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.323405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.323434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.323545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.323573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.323683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.323711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.323849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.323878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.324027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.324055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.324212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.324236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.324341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.324365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.324514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.324542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.324678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.324706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 
00:34:20.043 [2024-07-25 20:04:29.324825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.324850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.325029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.325056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.325224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.325252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.325375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.325403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.325564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.325591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.325775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.325833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.325986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.326024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.326192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.326219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.326332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.326362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.326498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.326526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 
00:34:20.043 [2024-07-25 20:04:29.326674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.326699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.326823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.326852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.326993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.327018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.327116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.327141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.327265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.327290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.327542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.327588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.327726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.327758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.327928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.327955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.328081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.328111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.328267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.328312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 
00:34:20.043 [2024-07-25 20:04:29.328478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.328505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.328716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.328760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.043 [2024-07-25 20:04:29.328893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.043 [2024-07-25 20:04:29.328919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.043 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.329048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.329082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.329206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.329234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.329395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.329437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.329565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.329608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.329738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.329766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.329860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.329886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.330029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.330073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 
00:34:20.044 [2024-07-25 20:04:29.330210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.330237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.330357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.330382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.330488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.330514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.330679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.330707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.330847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.330875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.331020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.331077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.331205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.331233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.331378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.331421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.331558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.331600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.331853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.331902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 
00:34:20.044 [2024-07-25 20:04:29.332003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.332028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.332193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.332236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.332368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.332400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.332548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.332576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.332685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.332713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.332844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.332872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.332985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.333013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.333166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.333193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.044 [2024-07-25 20:04:29.333341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.044 [2024-07-25 20:04:29.333384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.044 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.333528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.333575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 
00:34:20.045 [2024-07-25 20:04:29.333716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.333760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.333889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.333915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.334013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.334040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.334225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.334268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.334414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.334442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.334602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.334644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.334805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.334835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.334962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.334988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.335114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.335140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.335314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.335357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 
00:34:20.045 [2024-07-25 20:04:29.335468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.335498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.335641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.335684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.335783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.335809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.335962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.335988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.336111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.336137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.336284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.336309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.336433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.336459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.336556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.336582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.336738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.336765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.336892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.336918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 
00:34:20.045 [2024-07-25 20:04:29.337031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.337203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.337351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.337480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.337665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.337840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.337960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.337985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.338144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.338188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.338329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.338374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.338512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.338556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 
00:34:20.045 [2024-07-25 20:04:29.338687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.338713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.338810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.338836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.338935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.045 [2024-07-25 20:04:29.338961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.045 qpair failed and we were unable to recover it. 00:34:20.045 [2024-07-25 20:04:29.339099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.339134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.339285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.339328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.339503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.339532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.339713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.339764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.339930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.339957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.340071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.340114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.340243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.340270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 
00:34:20.046 [2024-07-25 20:04:29.340397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.340422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.340518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.340543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.340692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.340737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.340863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.340889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.341017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.341043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.341191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.341235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.341386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.341429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.341582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.341626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.341746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.341791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.341919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.341945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 
00:34:20.046 [2024-07-25 20:04:29.342047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.342080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.342197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.342226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.342404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.342457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.342608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.342637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.342814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.342839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.342933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.342958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.343087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.343113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.343206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.343247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.343387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.343415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.343552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.343580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 
00:34:20.046 [2024-07-25 20:04:29.343691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.343719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.343859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.343890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.344003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.344030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.344187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.046 [2024-07-25 20:04:29.344230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.046 qpair failed and we were unable to recover it. 00:34:20.046 [2024-07-25 20:04:29.344374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.344417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.344589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.344632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.344761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.344805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.344963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.344988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.345153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.345196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.345313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.345342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 
00:34:20.047 [2024-07-25 20:04:29.345509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.345552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.345698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.345742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.345894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.345920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.346089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.346244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.346392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.346562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.346698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.346837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 00:34:20.047 [2024-07-25 20:04:29.346972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.047 [2024-07-25 20:04:29.347011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.047 qpair failed and we were unable to recover it. 
00:34:20.047 [2024-07-25 20:04:29.347184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.047 [2024-07-25 20:04:29.347212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.047 qpair failed and we were unable to recover it.
00:34:20.047 [2024-07-25 20:04:29.348962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.047 [2024-07-25 20:04:29.348990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.047 qpair failed and we were unable to recover it.
00:34:20.048 [2024-07-25 20:04:29.351997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.048 [2024-07-25 20:04:29.352035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.048 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats back-to-back for tqpair=0x7fc964000b90, tqpair=0x99c840 and tqpair=0x7fc96c000b90 (all with addr=10.0.0.2, port=4420, errno = 111) through 2024-07-25 20:04:29.381 ...]
00:34:20.053 [2024-07-25 20:04:29.381383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.053 [2024-07-25 20:04:29.381407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.053 qpair failed and we were unable to recover it.
00:34:20.053 [2024-07-25 20:04:29.381565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.053 [2024-07-25 20:04:29.381592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.053 qpair failed and we were unable to recover it. 00:34:20.053 [2024-07-25 20:04:29.381732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.053 [2024-07-25 20:04:29.381759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.053 qpair failed and we were unable to recover it. 00:34:20.053 [2024-07-25 20:04:29.381861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.381889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.382066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.382109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.382238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.382263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.382399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.382456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.382616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.382661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.382776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.382819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.382970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.382996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.383147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.383191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 
00:34:20.054 [2024-07-25 20:04:29.383341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.383383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.383507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.383549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.383674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.383715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.383858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.383897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.384045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.384128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.384269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.384298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.384408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.384435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.384640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.384691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.384833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.384860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.384978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 
00:34:20.054 [2024-07-25 20:04:29.385106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.385253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.385425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.385557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.385750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.385871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.385898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.386020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.386044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.386206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.386233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.386373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.386401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.386569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.386620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 
00:34:20.054 [2024-07-25 20:04:29.386776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.386803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.386979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.387003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.054 [2024-07-25 20:04:29.387109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.054 [2024-07-25 20:04:29.387134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.054 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.387287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.387311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.387507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.387558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.387723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.387751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.387865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.387891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.388011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.388142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.388262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 
00:34:20.055 [2024-07-25 20:04:29.388435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.388611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.388797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.388973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.388998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.389132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.389157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.389257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.389281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.389391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.389418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.389672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.389726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.389883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.389910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.390076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.390115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 
00:34:20.055 [2024-07-25 20:04:29.390245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.390283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.390394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.390437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.390578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.390606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.390784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.390812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.390954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.390982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.391131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.391158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.391287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.391312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.391453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.391480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.391627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.391655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.391759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.391786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 
00:34:20.055 [2024-07-25 20:04:29.391929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.391956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.392075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.392101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.392206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.392232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.392329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.392354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.392482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.392509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.392673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.392701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.392825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.392850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.393018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.393043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.055 [2024-07-25 20:04:29.393174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.055 [2024-07-25 20:04:29.393199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.055 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.393297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.393322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 
00:34:20.056 [2024-07-25 20:04:29.393441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.393468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.393668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.393722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.393827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.393855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.393981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.394133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.394257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.394427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.394566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.394718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.394891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.394922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 
00:34:20.056 [2024-07-25 20:04:29.395044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.395191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.395321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.395492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.395690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.395831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.395973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.056 [2024-07-25 20:04:29.395998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.056 qpair failed and we were unable to recover it. 00:34:20.056 [2024-07-25 20:04:29.396101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.349 [2024-07-25 20:04:29.396127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.349 qpair failed and we were unable to recover it. 00:34:20.349 [2024-07-25 20:04:29.396223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.349 [2024-07-25 20:04:29.396248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.349 qpair failed and we were unable to recover it. 00:34:20.349 [2024-07-25 20:04:29.396351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.396376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 
00:34:20.350 [2024-07-25 20:04:29.396482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.396508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.396620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.396648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.396828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.396887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.396992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.397019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.397285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.397317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.397418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.397444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.397595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.397620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.397733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.397761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.397909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.397935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.398024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.398049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 
00:34:20.350 [2024-07-25 20:04:29.398183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.398213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.398334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.398361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.398471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.398499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.398640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.398668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.398897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.398955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.399063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.399089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.399241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.399266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.399381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.399409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.399576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.399620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.399767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.399815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 
00:34:20.350 [2024-07-25 20:04:29.399920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.399947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.400080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.400258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.400427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.400578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.400732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.400879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.400980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.401004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.401164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.401194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.401360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.401387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 
00:34:20.350 [2024-07-25 20:04:29.401524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.401551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.401668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.401713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.401865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.401890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.401992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.402019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.402177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.350 [2024-07-25 20:04:29.402222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.350 qpair failed and we were unable to recover it. 00:34:20.350 [2024-07-25 20:04:29.402359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.402402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.402526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.402570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.402668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.402694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.402797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.402823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.402946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.402971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 
00:34:20.351 [2024-07-25 20:04:29.403102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.403129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.403229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.403255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.403352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.403378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.403554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.403581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.403806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.403856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.404026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.404055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.404195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.404223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.404358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.404383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.404477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.404502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 00:34:20.351 [2024-07-25 20:04:29.404626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.351 [2024-07-25 20:04:29.404651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.351 qpair failed and we were unable to recover it. 
[... log entries from 2024-07-25 20:04:29.404773 through 20:04:29.438205 repeat the same two errors: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840, 0x7fc964000b90, 0x7fc96c000b90, or 0x7fc95c000b90 with addr=10.0.0.2, port=4420; every attempt ends with: qpair failed and we were unable to recover it. ...]
00:34:20.357 [2024-07-25 20:04:29.438307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.438333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.438484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.438511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.438717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.438769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.438908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.438936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.439046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.439079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.439172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.439197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.439340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.439367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.439549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.439578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.439744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.439772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.439884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.439912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 
00:34:20.357 [2024-07-25 20:04:29.440073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.440111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.440227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.440255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.440396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.440440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.440710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.440762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.440882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.440907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.441038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.441075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.441229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.441272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.441426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.441471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.441588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.441618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.441759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.441792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 
00:34:20.357 [2024-07-25 20:04:29.441949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.441974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.442124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.442150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.357 [2024-07-25 20:04:29.442273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.357 [2024-07-25 20:04:29.442303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.357 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.442417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.442445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.442553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.442580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.442690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.442717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.442853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.442880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.443047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.443084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.443222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.443249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.443392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.443420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 
00:34:20.358 [2024-07-25 20:04:29.443566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.443594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.443797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.443842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.443944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.443970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.444102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.444250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.444401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.444597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.444729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.444880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.444980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.445005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 
00:34:20.358 [2024-07-25 20:04:29.445132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.445159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.445273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.445300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.445410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.445438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.445584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.445611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.445812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.445857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.446017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.446044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.446170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.446198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.446327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.446355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.446541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.446583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.446729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.446785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 
00:34:20.358 [2024-07-25 20:04:29.446907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.446933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.447056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.447088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.447208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.447236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.447400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.447427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.447574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.447601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.447705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.447733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.447893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.447938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.448043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.448078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.448229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.448273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 00:34:20.358 [2024-07-25 20:04:29.448456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.358 [2024-07-25 20:04:29.448522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.358 qpair failed and we were unable to recover it. 
00:34:20.358 [2024-07-25 20:04:29.448673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.448721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.448815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.448842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.448944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.448970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.449080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.449123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.449266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.449294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.449405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.449433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.449571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.449599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.449732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.449759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.449881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.449905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.450036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 
00:34:20.359 [2024-07-25 20:04:29.450175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.450313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.450504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.450634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.450763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.450901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.450928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.451069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.451115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.451214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.451242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.451391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.451433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.451583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.451626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 
00:34:20.359 [2024-07-25 20:04:29.451776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.451820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.451973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.451998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.452148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.452193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.452339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.452431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.452583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.452626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.452800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.452848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.452981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.453007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.453152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.453196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.453325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.359 [2024-07-25 20:04:29.453368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.359 qpair failed and we were unable to recover it. 00:34:20.359 [2024-07-25 20:04:29.453546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.453606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 
00:34:20.360 [2024-07-25 20:04:29.453740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.453766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.453921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.453947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.454129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.454159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.454311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.454339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.454474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.454502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.454663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.454692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.454805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.454830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.454959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.454984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.455115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.455141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.455265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.455289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 
00:34:20.360 [2024-07-25 20:04:29.455410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.455439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.455550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.455577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.455708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.455736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.455833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.455866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.456011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.456174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.456308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.456510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.456662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.456831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 
00:34:20.360 [2024-07-25 20:04:29.456974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.456999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.457124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.457149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.457282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.457307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.457426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.457455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.457595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.457622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.457769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.457797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.457953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.457992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.458102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.458130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.458242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.458267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.458383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.458431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 
00:34:20.360 [2024-07-25 20:04:29.458551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.458594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.458742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.458768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.458893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.458919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.459026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.459051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.459164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.360 [2024-07-25 20:04:29.459190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.360 qpair failed and we were unable to recover it. 00:34:20.360 [2024-07-25 20:04:29.459340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.459367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.459531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.459558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.459671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.459699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.459857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.459901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.460022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.460048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 
00:34:20.361 [2024-07-25 20:04:29.460159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.460191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.460311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.460355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.460532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.460578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.460700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.460726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.460850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.460876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.461007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.461167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.461327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.461454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.461611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 
00:34:20.361 [2024-07-25 20:04:29.461762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.461885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.461910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.462894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.462918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.463047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.463080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 
00:34:20.361 [2024-07-25 20:04:29.463231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.463255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.463375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.463402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.463591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.463619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.463778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.463805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.463918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.463945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.464074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.464113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.464253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.464280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.464424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.464472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.464646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.464694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.464869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.464911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 
00:34:20.361 [2024-07-25 20:04:29.465042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.465080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.465203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.465231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.361 qpair failed and we were unable to recover it. 00:34:20.361 [2024-07-25 20:04:29.465413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.361 [2024-07-25 20:04:29.465440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.465598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.465624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.465766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.465793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.465897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.465922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.466017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.466197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.466331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.466498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 
00:34:20.362 [2024-07-25 20:04:29.466659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.466796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.466931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.466958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.467106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.467131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.467227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.467252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.467390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.467417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.467585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.467613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.467779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.467813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.467970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.467994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.468127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.468152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 
00:34:20.362 [2024-07-25 20:04:29.468300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.468328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.468442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.468469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.468629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.468657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.468759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.468786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.468934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.468962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.469072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.469224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.469397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.469529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.469670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 
00:34:20.362 [2024-07-25 20:04:29.469815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.469956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.469980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.470105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.470131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.470264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.470289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.470406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.470434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.470568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.470596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.470715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.470740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.470892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.470919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.362 [2024-07-25 20:04:29.471077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.362 [2024-07-25 20:04:29.471134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.362 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.471285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.471312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 
00:34:20.363 [2024-07-25 20:04:29.471501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.471530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.471675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.471712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.471835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.471864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.472940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.472967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 
00:34:20.363 [2024-07-25 20:04:29.473124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.473153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.473308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.473333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.473462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.473491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.473631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.473661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.473781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.473824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.473943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.473969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.474126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.474153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.474293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.474318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.474464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.474491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.474622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.474650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 
00:34:20.363 [2024-07-25 20:04:29.474763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.474790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.474901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.474928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.475098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.475286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.475432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.475576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.475705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.475843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.475992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.476016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.476130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.476156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 
00:34:20.363 [2024-07-25 20:04:29.476253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.476277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.476412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.476437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.476596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.363 [2024-07-25 20:04:29.476624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.363 qpair failed and we were unable to recover it. 00:34:20.363 [2024-07-25 20:04:29.476793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.476820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.476947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.476972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.477066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.477091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.477229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.477257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.477393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.477432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.477589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.477620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.477799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.477829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 
00:34:20.364 [2024-07-25 20:04:29.477998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.478027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.478183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.478210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.478391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.478435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.478602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.478630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.478767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.478796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.478941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.478967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.479090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.479117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.479237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.479263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.479411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.479440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.479577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.479606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 
00:34:20.364 [2024-07-25 20:04:29.479740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.479768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.479895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.479934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.480079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.480118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.480249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.480275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.480374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.480399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.480525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.480551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.480677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.480702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.480828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.480856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.481018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.481046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.481189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.481215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 
00:34:20.364 [2024-07-25 20:04:29.481331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.481361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.481471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.481497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.481674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.481717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.481871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.364 [2024-07-25 20:04:29.481897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.364 qpair failed and we were unable to recover it. 00:34:20.364 [2024-07-25 20:04:29.482024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.482052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.482207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.482237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.482357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.482385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.482550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.482579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.482707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.482735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.482876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.482904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 
00:34:20.365 [2024-07-25 20:04:29.483067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.483093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.483203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.483228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.483331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.483356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.483529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.483558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.483688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.483716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.483850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.483878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.483990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.484015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.484129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.484154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.484265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.484293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.484471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.484527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 
00:34:20.365 [2024-07-25 20:04:29.484684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.484713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.484882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.484907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.485030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.485180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.485365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.485561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.485744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.485897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.485998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.486140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 
00:34:20.365 [2024-07-25 20:04:29.486268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.486435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.486636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.486808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.486949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.486974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.487106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.487132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.487239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.487264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.487425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.487453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.487592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.487620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 00:34:20.365 [2024-07-25 20:04:29.487759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.365 [2024-07-25 20:04:29.487787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.365 qpair failed and we were unable to recover it. 
00:34:20.366 [2024-07-25 20:04:29.487933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.487961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.488953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.488979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.489109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.489239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 
00:34:20.366 [2024-07-25 20:04:29.489374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.489523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.489677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.489824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.489946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.489972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.490075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.490121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.490235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.490263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.490414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.490442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.490615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.490666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.490776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.490808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 
00:34:20.366 [2024-07-25 20:04:29.490947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.490975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.491115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.491145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.491320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.491363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.491479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.491522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.491717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.491766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.491865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.491891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.492015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.492040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.492210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.492254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.492374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.492416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.492557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.492602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 
00:34:20.366 [2024-07-25 20:04:29.492735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.492762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.492909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.492936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.493065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.493092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.493225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.493252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.493392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.366 [2024-07-25 20:04:29.493419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.366 qpair failed and we were unable to recover it. 00:34:20.366 [2024-07-25 20:04:29.493527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.493554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.493695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.493724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.493884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.493913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.494074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.494127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.494251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.494280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 
00:34:20.367 [2024-07-25 20:04:29.494424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.494454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.494625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.494679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.494813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.494846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.495020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.495047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.495173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.495213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.495310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.495339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.495492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.495528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.495693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.495723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.495921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.495978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.496099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.496142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 
00:34:20.367 [2024-07-25 20:04:29.496245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.496271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.496427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.496471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.496585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.496615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.496780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.496809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.496909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.496938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.497049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.497088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.497234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.497260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.497387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.497412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.497561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.497590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.497713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.497758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 
00:34:20.367 [2024-07-25 20:04:29.497894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.497922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.498076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.498119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.498273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.498302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.498414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.498441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.498558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.498598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.498768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.498798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.498949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.498975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.499106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.499133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.499259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.499285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 00:34:20.367 [2024-07-25 20:04:29.499414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.367 [2024-07-25 20:04:29.499457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.367 qpair failed and we were unable to recover it. 
00:34:20.367 [2024-07-25 20:04:29.499564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.499592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.499801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.499830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.499984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.500139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.500302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.500453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.500609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.500747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.500939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.500968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.501131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.501166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 
00:34:20.368 [2024-07-25 20:04:29.501338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.501367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.501512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.501541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.501655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.501683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.501846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.501873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.502041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.502120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.502238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.502268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.502435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.502465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.502614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.502644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.502752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.502782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.502968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.503008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 
00:34:20.368 [2024-07-25 20:04:29.503137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.503164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.503304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.503329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.503448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.503476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.503746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.503799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.503961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.503991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.504131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.504160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.504276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.504324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.504447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.504479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.504715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.504767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.504860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.504887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 
00:34:20.368 [2024-07-25 20:04:29.505037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.505087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.505221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.505248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.505400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.505429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.505531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.505558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.505690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.368 [2024-07-25 20:04:29.505718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.368 qpair failed and we were unable to recover it. 00:34:20.368 [2024-07-25 20:04:29.505821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.505847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.506004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.506029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.506203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.506230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.506357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.506400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.506541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.506569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 
00:34:20.369 [2024-07-25 20:04:29.506708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.506737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.506852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.506883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.507021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.507051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.507215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.507241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.507378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.507423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.507565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.507594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.507760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.507791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.507970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.508000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.508169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.508209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.508369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.508410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 
00:34:20.369 [2024-07-25 20:04:29.508571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.508602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.508724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.508750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.508919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.508946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.509036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.509068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.509199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.509230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.509363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.509393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.509555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.509584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.509689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.509723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.509895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.509925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.510051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.510087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 
00:34:20.369 [2024-07-25 20:04:29.510254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.510281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.510441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.510474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.510579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.510609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.510832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.510884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.511056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.511090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.511216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.369 [2024-07-25 20:04:29.511242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.369 qpair failed and we were unable to recover it. 00:34:20.369 [2024-07-25 20:04:29.511426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.511454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.511580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.511648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.511928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.511982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.512105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.512149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 
00:34:20.370 [2024-07-25 20:04:29.512247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.512291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.512503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.512534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.512667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.512697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.512837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.512867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.513048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.513081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.513205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.513232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.513384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.513441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.513600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.513646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.513779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.513834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.513987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.514014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 
00:34:20.370 [2024-07-25 20:04:29.514142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.514188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.514355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.514394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.514588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.514642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.514857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.514911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.515070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.515108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.515246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.515274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.515406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.515449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.515591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.515620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.515762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.515791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.515931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.515961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 
00:34:20.370 [2024-07-25 20:04:29.516144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.516172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.516304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.516331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.516487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.516516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.516658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.516689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.516834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.516864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.517037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.517073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.517221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.517248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.517397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.517427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.517622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.517666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.517877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.517928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 
00:34:20.370 [2024-07-25 20:04:29.518082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.518109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.370 qpair failed and we were unable to recover it. 00:34:20.370 [2024-07-25 20:04:29.518217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.370 [2024-07-25 20:04:29.518243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.518399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.518426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.518676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.518723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.518964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.519014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.519194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.519221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.519378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.519404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.519610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.519660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.519859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.519921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.520046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.520081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 
00:34:20.371 [2024-07-25 20:04:29.520231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.520257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.520443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.520473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.520654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.520711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.520855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.520885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.521022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.521051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.521208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.521235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.521366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.521408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.521540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.521569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.521710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.521740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.521903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.521932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 
00:34:20.371 [2024-07-25 20:04:29.522110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.522137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.522282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.522322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.522512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.522556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.522709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.522755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.522935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.522962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.523116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.523143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.523249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.523276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.523422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.523451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.523659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.523714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.523848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.523877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 
00:34:20.371 [2024-07-25 20:04:29.524022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.524048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.524191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.524218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.524359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.524388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.524496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.524525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.524735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.524792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.524955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.524984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.525155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.525196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.371 [2024-07-25 20:04:29.525312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.371 [2024-07-25 20:04:29.525357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.371 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.525514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.525545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.525720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.525779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 
00:34:20.372 [2024-07-25 20:04:29.525947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.525976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.526159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.526186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.526310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.526352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.526616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.526666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.526946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.526999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.527167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.527196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.527322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.527351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.527515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.527544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.527775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.527828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.527962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.527991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 
00:34:20.372 [2024-07-25 20:04:29.528142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.528169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.528302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.528328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.528442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.528531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.528673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.528703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.528813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.528841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.528979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.529005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.529103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.529131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.529257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.529283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.529391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.529430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.529613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.529660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 
00:34:20.372 [2024-07-25 20:04:29.529837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.529886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.529993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.530150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.530294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.530471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.530642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.530804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.530946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.530974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.531126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.531158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.531301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.531332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 
00:34:20.372 [2024-07-25 20:04:29.531498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.531528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.531659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.531689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.531855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.531905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.532034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.372 [2024-07-25 20:04:29.532068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.372 qpair failed and we were unable to recover it. 00:34:20.372 [2024-07-25 20:04:29.532209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.532253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.532379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.532406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.532530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.532556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.532660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.532692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.532803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.532830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.532982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 
00:34:20.373 [2024-07-25 20:04:29.533111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.533268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.533447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.533589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.533793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.533924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.533950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.534092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.534123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.534317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.534347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.534484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.534514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.534681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.534710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 
00:34:20.373 [2024-07-25 20:04:29.534879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.534909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.535039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.535092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.535245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.535272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.535446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.535475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.535681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.535746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.535913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.535942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.536120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.536149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.536268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.536298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.536498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.536528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.536693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.536736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 
00:34:20.373 [2024-07-25 20:04:29.536888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.536915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.537012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.537039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.537165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.537209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.537382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.537426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.537599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.537642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.537777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.537853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.537967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.537996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.538145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.538172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.538264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.538290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.538440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.538469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 
00:34:20.373 [2024-07-25 20:04:29.538593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.373 [2024-07-25 20:04:29.538638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.373 qpair failed and we were unable to recover it. 00:34:20.373 [2024-07-25 20:04:29.538777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.538807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.538925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.538951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.539082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.539109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.539223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.539263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.539433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.539478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.539631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.539663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.539930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.539982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.540165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.540193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.540334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.540362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 
00:34:20.374 [2024-07-25 20:04:29.540554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.540607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.540819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.540871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.541021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.541046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.541179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.541206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.541383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.541412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.541538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.541579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.541713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.541742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.541880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.541909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.542072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.542113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.542227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.542254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 
00:34:20.374 [2024-07-25 20:04:29.542392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.542436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.542569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.542612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.542879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.542923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.543045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.543082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.543224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.543250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.543401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.543429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.543563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.543628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.543852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.543905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.544027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.544056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 00:34:20.374 [2024-07-25 20:04:29.544172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.374 [2024-07-25 20:04:29.544198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.374 qpair failed and we were unable to recover it. 
00:34:20.375 [2024-07-25 20:04:29.544324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.544370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.544537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.544567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.544752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.544804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.544970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.545012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.545147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.545177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.545295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.545332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.545485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.545514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.545716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.545774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.545948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.545979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.546154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.546193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 
00:34:20.375 [2024-07-25 20:04:29.546318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.546349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.546625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.546675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.546941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.546992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.547118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.547146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.547289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.547315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.547449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.547478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.547602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.547645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.547805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.547834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.547979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.548019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.548168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.548204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 
00:34:20.375 [2024-07-25 20:04:29.548304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.548331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.548450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.548477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.548629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.548660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.548825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.548854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.549032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.549084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.549265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.549292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.549418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.549487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.549739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.549791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.549906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.549937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 00:34:20.375 [2024-07-25 20:04:29.550056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.375 [2024-07-25 20:04:29.550091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.375 qpair failed and we were unable to recover it. 
00:34:20.375 [2024-07-25 20:04:29.550216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.375 [2024-07-25 20:04:29.550244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.375 qpair failed and we were unable to recover it.
00:34:20.375 [2024-07-25 20:04:29.551113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.375 [2024-07-25 20:04:29.551153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.375 qpair failed and we were unable to recover it.
00:34:20.375 [2024-07-25 20:04:29.551275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.375 [2024-07-25 20:04:29.551315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:20.375 qpair failed and we were unable to recover it.
[... the same two errors (posix.c:1037:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error) repeat continuously for tqpair=0x99c840, tqpair=0x7fc96c000b90 and tqpair=0x7fc95c000b90, all targeting addr=10.0.0.2, port=4420, from [2024-07-25 20:04:29.550216] through [2024-07-25 20:04:29.586476] (console timestamps 00:34:20.375 to 00:34:20.381); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:20.381 [2024-07-25 20:04:29.586669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.381 [2024-07-25 20:04:29.586736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.381 qpair failed and we were unable to recover it. 00:34:20.381 [2024-07-25 20:04:29.586868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.381 [2024-07-25 20:04:29.586894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.381 qpair failed and we were unable to recover it. 00:34:20.381 [2024-07-25 20:04:29.587016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.381 [2024-07-25 20:04:29.587043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.381 qpair failed and we were unable to recover it. 00:34:20.381 [2024-07-25 20:04:29.587181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.381 [2024-07-25 20:04:29.587207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.381 qpair failed and we were unable to recover it. 00:34:20.381 [2024-07-25 20:04:29.587335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.381 [2024-07-25 20:04:29.587361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.381 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.587482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.587524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.587655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.587685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.587835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.587868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.587996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.588023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.588202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.588228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 
00:34:20.382 [2024-07-25 20:04:29.588331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.588357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.588484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.588512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.588666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.588695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.588843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.588869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.588996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.589158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.589314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.589462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.589611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.589790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 
00:34:20.382 [2024-07-25 20:04:29.589947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.589976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.590122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.590162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.590322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.590363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.590506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.590537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.590678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.590709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.590866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.590894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.591039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.591076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.591211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.591238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.591359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.591398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.591522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.591549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 
00:34:20.382 [2024-07-25 20:04:29.591713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.591756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.591913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.591940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.592075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.592127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.592262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.592290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.592466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.592494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.592594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.592622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.592781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.592811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.592930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.592958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.593086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.593115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.593238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.593267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 
00:34:20.382 [2024-07-25 20:04:29.593388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.593415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.382 qpair failed and we were unable to recover it. 00:34:20.382 [2024-07-25 20:04:29.593539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.382 [2024-07-25 20:04:29.593565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.593715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.593747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.593922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.593949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.594095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.594126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.594270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.594296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.594455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.594482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.594577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.594608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.594716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.594745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.594878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.594905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 
00:34:20.383 [2024-07-25 20:04:29.595014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.595065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.595190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.595216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.595356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.595394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.595521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.595549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.595703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.595746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.595882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.595912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.596043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.596079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.596226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.596254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.596406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.596433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.596574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.596603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 
00:34:20.383 [2024-07-25 20:04:29.596754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.596784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.596918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.596946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.597111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.597138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.597270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.597300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.597450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.597478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.597601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.597628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.597788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.597818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.597952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.597980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.598107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.598135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.598233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.598261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 
00:34:20.383 [2024-07-25 20:04:29.598389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.598417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.598588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.598619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.598757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.598787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.598925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.598953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.599110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.599156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.599337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.383 [2024-07-25 20:04:29.599364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.383 qpair failed and we were unable to recover it. 00:34:20.383 [2024-07-25 20:04:29.599473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.599500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.599620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.599648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.599800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.599830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.599982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.600010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 
00:34:20.384 [2024-07-25 20:04:29.600187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.600218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.600450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.600503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.600616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.600641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.600763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.600790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.600978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.601108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.601274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.601433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.601619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.601769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 
00:34:20.384 [2024-07-25 20:04:29.601918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.601950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.602127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.602154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.602306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.602349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.602498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.602525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.602688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.602715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.602809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.602853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.603026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.603057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.603214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.603241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.603417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.603448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.603591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.603619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 
00:34:20.384 [2024-07-25 20:04:29.603775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.603802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.603980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.604153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.604315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.604468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.604631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.604774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.604900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.384 [2024-07-25 20:04:29.604927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.384 qpair failed and we were unable to recover it. 00:34:20.384 [2024-07-25 20:04:29.605071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.605112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.605226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.605253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 
00:34:20.385 [2024-07-25 20:04:29.605382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.605409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.605528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.605554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.605742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.605769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.605867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.605893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.606092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.606138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.606269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.606297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.606437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.606465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.606629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.606701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.606853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.606881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.606971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.606998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 
00:34:20.385 [2024-07-25 20:04:29.607127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.607159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.607311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.607338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.607508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.607539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.607763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.607823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.607987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.608158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.608309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.608473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.608628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 00:34:20.385 [2024-07-25 20:04:29.608802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it. 
00:34:20.385 [2024-07-25 20:04:29.608959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.385 [2024-07-25 20:04:29.608987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.385 qpair failed and we were unable to recover it.
00:34:20.385 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-25 20:04:29.608959 through 20:04:29.644585, alternating between tqpair=0x7fc96c000b90 and tqpair=0x7fc95c000b90, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:20.391 [2024-07-25 20:04:29.644706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.644733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.644828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.644854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.644968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.645137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.645271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.645417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.645573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.645772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.645910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.645940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.646081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.646108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 
00:34:20.391 [2024-07-25 20:04:29.646233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.646260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.391 [2024-07-25 20:04:29.646388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.391 [2024-07-25 20:04:29.646417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.391 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.646568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.646594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.646723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.646749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.646892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.646938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.647146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.647176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.647279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.647326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.647533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.647590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.647764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.647791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.647917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.647962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 
00:34:20.392 [2024-07-25 20:04:29.648136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.648167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.648320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.648348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.648481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.648508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.648670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.648697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.648827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.648854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.648955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.648982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.649096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.649124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.649258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.649286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.649418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.649464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.649567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.649599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 
00:34:20.392 [2024-07-25 20:04:29.649775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.649803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.649947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.649978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.650136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.650165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.650324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.650352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.650454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.650481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.650584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.650611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.650738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.650765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.650893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.650937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.651079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.651110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.651230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.651256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 
00:34:20.392 [2024-07-25 20:04:29.651409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.651435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.651586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.651616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.651760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.651787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.651909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.651936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.652084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.652114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.652238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.652265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.652394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.652421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.652584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.652656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.392 qpair failed and we were unable to recover it. 00:34:20.392 [2024-07-25 20:04:29.652808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.392 [2024-07-25 20:04:29.652835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.652965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.652992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 
00:34:20.393 [2024-07-25 20:04:29.653100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.653129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.653260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.653288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.653443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.653473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.653589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.653620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.653738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.653770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.653874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.653903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.654068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.654096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.654251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.654278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.654381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.654409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.654512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.654541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 
00:34:20.393 [2024-07-25 20:04:29.654701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.654729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.654872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.654902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.655054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.655091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.655221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.655249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.655387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.655432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.655567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.655597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.655768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.655796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.655910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.655940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.656099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.656128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.656260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.656288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 
00:34:20.393 [2024-07-25 20:04:29.656464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.656494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.656660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.656690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.656828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.656855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.656981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.657008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.657171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.657203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.657359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.657387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.657515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.657559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.657763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.657818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.657945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.657972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.658073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.658101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 
00:34:20.393 [2024-07-25 20:04:29.658274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.658304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.658462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.658490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.658641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.393 [2024-07-25 20:04:29.658668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.393 qpair failed and we were unable to recover it. 00:34:20.393 [2024-07-25 20:04:29.658849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.658908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.659067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.659096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.659250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.659278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.659376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.659404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.659535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.659563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.659714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.659759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.659922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.659952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 
00:34:20.394 [2024-07-25 20:04:29.660102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.660131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.660260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.660304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.660471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.660501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.660661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.660688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.660812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.660860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.661001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.661033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.661165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.661193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.661317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.661345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.661482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.661512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.661657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.661683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 
00:34:20.394 [2024-07-25 20:04:29.661836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.661882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.662025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.662056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.662211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.662239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.662387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.662414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.662563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.662594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.662715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.662742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.662846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.662874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.663047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.663084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.663237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.663264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.663367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.663394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 
00:34:20.394 [2024-07-25 20:04:29.663524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.663551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.663650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.663678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.663806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.663833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.663984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.664014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.664195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.664223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.664350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.664395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.664573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.664634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.664807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.664834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.665005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.665034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.665192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.665223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 
00:34:20.394 [2024-07-25 20:04:29.665372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.394 [2024-07-25 20:04:29.665399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.394 qpair failed and we were unable to recover it. 00:34:20.394 [2024-07-25 20:04:29.665530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.665572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.665709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.665738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.665882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.665908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.666005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.666031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.666209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.666254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.666395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.666425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.666557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.666586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.666745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.666774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.666927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.666955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 
00:34:20.395 [2024-07-25 20:04:29.667127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.667159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.667312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.667343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.667514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.667542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.667669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.667713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.667877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.667911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.668042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.668077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.668227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.668255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.668400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.668430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.668551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.668579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.668704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.668731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 
00:34:20.395 [2024-07-25 20:04:29.668876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.668905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.669079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.669107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.669255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.669284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.669423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.669453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.669634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.669662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.669833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.669862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.669996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.670026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.670185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.670213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.670324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.670351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.670530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.670560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 
00:34:20.395 [2024-07-25 20:04:29.670703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.670730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.670836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.670862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.670987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.671147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.671295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.671485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.671633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.671780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.395 qpair failed and we were unable to recover it. 00:34:20.395 [2024-07-25 20:04:29.671915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.395 [2024-07-25 20:04:29.671942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.672073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.672101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 
00:34:20.396 [2024-07-25 20:04:29.672196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.672224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.672362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.672419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.672558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.672588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.672711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.672738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.672871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.672898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.673030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.673176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.673333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.673523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.673646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 
00:34:20.396 [2024-07-25 20:04:29.673796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.673936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.673965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.674107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.674135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.674281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.674308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.674439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.674470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.674604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.674631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.674776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.674806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.674955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.674982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.675103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.675130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.675278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.675307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 
00:34:20.396 [2024-07-25 20:04:29.675458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.675486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.675584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.675612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.675714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.675742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.675864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.675891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.676010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.676037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.676163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.676193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.676323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.676350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.676501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.676527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.676657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.676685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.676836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.676864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 
00:34:20.396 [2024-07-25 20:04:29.676998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.677135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.677312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.677436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.677559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.677708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.677830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.677857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.396 [2024-07-25 20:04:29.678017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.396 [2024-07-25 20:04:29.678078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.396 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.678229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.678258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.678413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.678458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 
00:34:20.397 [2024-07-25 20:04:29.678597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.678627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.678781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.678809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.678965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.679147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.679302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.679428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.679576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.679739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.679891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.679918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.680067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.680096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 
00:34:20.397 [2024-07-25 20:04:29.680226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.680251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.680378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.680405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.680557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.680585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.680735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.680761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.680886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.680933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.681080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.681109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.681215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.681241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.681371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.681397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.681549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.681592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.681710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.681737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 
00:34:20.397 [2024-07-25 20:04:29.681861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.681887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.682049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.682088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.682218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.682246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.682368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.682396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.682519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.682547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.682696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.682723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.682855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.682900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.683017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.683197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.683322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 
00:34:20.397 [2024-07-25 20:04:29.683472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.683628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.683789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.683946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.683975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.684102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.684130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.397 [2024-07-25 20:04:29.684227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.397 [2024-07-25 20:04:29.684254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.397 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.684408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.684435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.684586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.684614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.684737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.684780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.684924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.684952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 
00:34:20.398 [2024-07-25 20:04:29.685073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.685208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.685360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.685513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.685664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.685815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.685970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.685997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.686089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.686117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.686244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.686272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.686382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.686423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 
00:34:20.398 [2024-07-25 20:04:29.686581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.686630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.686807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.686852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.686984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.687113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.687236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.687423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.687578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.687753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.687934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.687961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.688092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.688120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 
00:34:20.398 [2024-07-25 20:04:29.688269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.688315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.688466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.688513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.688646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.688674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.688827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.688855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.688960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.688988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.689156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.689201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.689346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.689390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.689560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.689608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.689745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.689772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 00:34:20.398 [2024-07-25 20:04:29.689922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.398 [2024-07-25 20:04:29.689949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.398 qpair failed and we were unable to recover it. 
00:34:20.399 [2024-07-25 20:04:29.690117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.690163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.690309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.690339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.690483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.690511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.690618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.690649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.690806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.690834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.690964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.690992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.691166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.691210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.691356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.691400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.691574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.691619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.691745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.691772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 
00:34:20.399 [2024-07-25 20:04:29.691864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.691891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.692037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.692085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.692280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.692312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.692422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.692452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.692588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.692617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.692835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.692887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.692997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.693027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.693158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.693186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.693353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.693382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.693510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.693537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 
00:34:20.399 [2024-07-25 20:04:29.693713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.693743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.693881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.693911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.694007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.694036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.694233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.694274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.694438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.694490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.694638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.694683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.399 qpair failed and we were unable to recover it. 00:34:20.399 [2024-07-25 20:04:29.694858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.399 [2024-07-25 20:04:29.694905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.694996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.695024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.695164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.695191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.695313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.695358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 
00:34:20.400 [2024-07-25 20:04:29.695532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.695576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.695714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.695759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.695923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.695950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.696079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.696107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.696230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.696256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.696409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.696438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.696667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.696724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.696887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.696916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.697036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.697070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.697234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.697279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 
00:34:20.400 [2024-07-25 20:04:29.697454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.697486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.697635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.697693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.697903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.697953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.698084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.698130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.698261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.698289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.698404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.698435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.698607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.698642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.698750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.698780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.698949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.698980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.699094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.699139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 
00:34:20.400 [2024-07-25 20:04:29.699242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.699271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.699478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.699541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.699663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.699707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.699809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.699837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.699988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.700016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.700166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.700213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.700354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.700398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.700572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.700616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.700769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.700796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.700929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.700956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 
00:34:20.400 [2024-07-25 20:04:29.701078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.701106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.701224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.701255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.701423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.701468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.701619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.701663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.701797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.701829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.701935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.701963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.702137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.702183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.702293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.702322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.702474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.702519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 00:34:20.400 [2024-07-25 20:04:29.702622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.400 [2024-07-25 20:04:29.702649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.400 qpair failed and we were unable to recover it. 
00:34:20.401 [2024-07-25 20:04:29.702756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.702784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.702939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.702966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.703096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.703125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.703255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.703282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.703413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.703440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.703592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.703619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.703748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.703775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.703876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.703903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.704005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.704033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.704216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.704262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 
00:34:20.401 [2024-07-25 20:04:29.704401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.704446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.704570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.704598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.704751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.704778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.704902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.704930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.705030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.705071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.705186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.705217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.705390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.705434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.705564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.705592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.705710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.705751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.705898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.705939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 
00:34:20.401 [2024-07-25 20:04:29.706045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.706084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.706306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.706337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.706450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.706479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.706647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.706691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.706818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.706846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.707010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.707037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.707159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.707200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.707390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.707434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.707602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.707657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.707808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.707835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 
00:34:20.401 [2024-07-25 20:04:29.707965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.707993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.708162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.708196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.708309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.708339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.708441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.708471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.708664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.708714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.708931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.708983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.709126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.709173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.709311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.709341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.709504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.709534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.709750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.709811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 
00:34:20.401 [2024-07-25 20:04:29.709962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.709991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.401 [2024-07-25 20:04:29.710129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.401 [2024-07-25 20:04:29.710156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.401 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.710275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.710305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.710530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.710589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.710752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.710780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.710912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.710939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.711064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.711091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.711198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.711225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.711331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.711358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.711519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.711587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 
00:34:20.402 [2024-07-25 20:04:29.711843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.711897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.712028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.712058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.712235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.712264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.712372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.712402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.712531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.712560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.712662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.712691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.712830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.712860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.713013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.713071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.713220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.713253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.713401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.713432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 
00:34:20.402 [2024-07-25 20:04:29.713597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.713627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.713744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.713772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.713967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.713997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.714154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.714182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.714332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.714362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.714487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.714513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.714692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.714721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.714929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.714982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.715100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.715125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.715258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.715285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 
00:34:20.402 [2024-07-25 20:04:29.715427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.715456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.715624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.715698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.715838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.715867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.716010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.716037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.716178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.716205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.716304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.716333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.716459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.716490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.716640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.716671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.716834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.716864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.717015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.717044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 
00:34:20.402 [2024-07-25 20:04:29.717186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.402 [2024-07-25 20:04:29.717213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.402 qpair failed and we were unable to recover it. 00:34:20.402 [2024-07-25 20:04:29.717350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.717378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.717507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.717550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.717655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.717685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.717796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.717826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.717972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.717999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.718157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.718185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.718309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.718339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.718505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.718534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.718674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.718704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 
00:34:20.403 [2024-07-25 20:04:29.718824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.718853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.718994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.719023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.719144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.719171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.719304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.719333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.719490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.719520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.719698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.719727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.719859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.719889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.720036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.720069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.720160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.720187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.720340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.720366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 
00:34:20.403 [2024-07-25 20:04:29.720455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.720497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.720607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.720637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.720810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.720844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.720987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.721017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.721166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.721194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.721324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.721351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.721440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.721467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.721652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.721682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.721885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.721914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.722033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.722069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 
00:34:20.403 [2024-07-25 20:04:29.722180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.722207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.722328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.722354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.722493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.722522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.722695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.722725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.722858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.722888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.723023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.723052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.723188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.723215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.723359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.723389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.723553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.723582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 00:34:20.403 [2024-07-25 20:04:29.723698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.723727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.403 qpair failed and we were unable to recover it. 
00:34:20.403 [2024-07-25 20:04:29.723913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.403 [2024-07-25 20:04:29.723943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.724087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.724115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.724249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.724276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.724407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.724433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.724555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.724598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.724740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.724769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.724932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.724961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.725126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.725168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.725344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.725376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 00:34:20.404 [2024-07-25 20:04:29.725507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.404 [2024-07-25 20:04:29.725559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.404 qpair failed and we were unable to recover it. 
00:34:20.404 [2024-07-25 20:04:29.725709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.404 [2024-07-25 20:04:29.725740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:20.404 qpair failed and we were unable to recover it.
[The same three-line failure pattern repeats continuously from 20:04:29.725 through 20:04:29.762 (console timestamps 00:34:20.404-00:34:20.693): posix_sock_create fails with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair 0x99c840, 0x7fc95c000b90, or 0x7fc964000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."]
00:34:20.693 [2024-07-25 20:04:29.762144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-25 20:04:29.762185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-25 20:04:29.762296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-25 20:04:29.762340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-25 20:04:29.762564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-25 20:04:29.762629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-25 20:04:29.762861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-25 20:04:29.762915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-25 20:04:29.763070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-25 20:04:29.763097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.763253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.763284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.763525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.763578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.763730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.763775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.763915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.763946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.764132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.764161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 
00:34:20.694 [2024-07-25 20:04:29.764268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.764296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.764391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.764437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.764578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.764608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.764774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.764805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.764934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.764965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.765130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.765171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.765305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.765350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.765484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.765513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.765640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.765667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.765799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.765829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 
00:34:20.694 [2024-07-25 20:04:29.765957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.765984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.766111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.766138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.766293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.766320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.766466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.766510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.766647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.766677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.766815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.766844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.767069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.767113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.767210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.767237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.767399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.767429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.767606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.767636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 
00:34:20.694 [2024-07-25 20:04:29.767749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.767779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.767944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.767974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.768200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.768231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.768357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.768400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.768529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.768558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.768737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.768767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.768896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.768942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.769072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.769101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.769233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-25 20:04:29.769261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-25 20:04:29.769393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.769437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 
00:34:20.695 [2024-07-25 20:04:29.769574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.769605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.769740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.769770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.769939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.769969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.770145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.770173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.770323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.770354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.770519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.770549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.770687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.770717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.770898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.770929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.771100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.771130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.771279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.771310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 
00:34:20.695 [2024-07-25 20:04:29.771475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.771505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.771639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.771669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.771829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.771859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.771993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.772033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.772183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.772224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.772379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.772410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.772547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.772578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.772715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.772745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.772887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.772916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.773043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.773083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 
00:34:20.695 [2024-07-25 20:04:29.773213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.773241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.773369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.773401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.773576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.773606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.773772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.773802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.773947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.773975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.774106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.774134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.774292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.774319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.774494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.774524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.774777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.774848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.775036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.775070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 
00:34:20.695 [2024-07-25 20:04:29.775203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.775230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.775360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.775386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.775537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.775596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.775730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.775773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.775936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-25 20:04:29.775966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-25 20:04:29.776107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.776135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.776240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.776267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.776411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.776440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.776558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.776601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.776763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.776793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 
00:34:20.696 [2024-07-25 20:04:29.776938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.776969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.777113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.777142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.777274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.777302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.777454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.777498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.777639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.777669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.777777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.777808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.777942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.777987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.778121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.778150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.778250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.778277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.778492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.778522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 
00:34:20.696 [2024-07-25 20:04:29.778677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.778707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.778844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.778874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.778981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.779133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.779292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.779451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.779656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.779786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.779924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.779955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.780100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.780128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 
00:34:20.696 [2024-07-25 20:04:29.780232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.780259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.780359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.780402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.780532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.780562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.780680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-25 20:04:29.780710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-25 20:04:29.780845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.780875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.781067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.781126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.781233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.781262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.781441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.781471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.781635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.781682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.781826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.781871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 
00:34:20.697 [2024-07-25 20:04:29.781993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.782020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.782192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.782220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.782376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.782406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.782520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.782554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.782705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.782758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.782925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.782955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.783091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.783135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.783279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.783309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.783444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.783473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.783615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.783644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 
00:34:20.697 [2024-07-25 20:04:29.783742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.783771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.783890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.783919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.784077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.784138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.784271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.784303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.784466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.784497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.784630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.784660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.784804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.784834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.784981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.785013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.785169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.785197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.785329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.785359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 
00:34:20.697 [2024-07-25 20:04:29.785532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.785577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.785724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.785769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.785920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.785948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.786120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.786166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.786300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.786330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.786460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.786489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.786634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.786665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.786793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.786820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.786952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.786980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-25 20:04:29.787105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-25 20:04:29.787160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 
00:34:20.698 [2024-07-25 20:04:29.787301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.787337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.787484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.787514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.787654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.787686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.787824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.787855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.788019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.788049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.788172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.788199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.788363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.788394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.788498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.788541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.788701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.788732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.788920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.788947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 
00:34:20.698 [2024-07-25 20:04:29.789101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.789130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.789258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.789285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.789394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.789423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.789564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.789594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.789709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.789739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.789905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.789934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.790051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.790089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.790229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.790256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.790406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.790454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.790597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.790642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 
00:34:20.698 [2024-07-25 20:04:29.790787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.790833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.790985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.791012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.791166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.791215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.791333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.791378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.791525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.791570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.791714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.791754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.791879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.791908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.792032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.792065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.792205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.792232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.792364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.792391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 
00:34:20.698 [2024-07-25 20:04:29.792547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.792573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.792720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.792767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.792923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.792950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.793080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.793109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.793250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.698 [2024-07-25 20:04:29.793295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.698 qpair failed and we were unable to recover it. 00:34:20.698 [2024-07-25 20:04:29.793443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.793486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.793631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.793676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.793802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.793831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.793960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.793988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.794089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.794134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 
00:34:20.699 [2024-07-25 20:04:29.794274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.794308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.794413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.794443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.794581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.794611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.794755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.794806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.794943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.794986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.795084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.795112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.795264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.795291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.795460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.795490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.795643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.795713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.795834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.795877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 
00:34:20.699 [2024-07-25 20:04:29.796039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.796074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.796222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.796248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.796392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.796421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.796560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.796590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.796725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.796752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.796896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.796926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.797070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.797114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.797210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.797237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.797377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.797407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.797549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.797578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 
00:34:20.699 [2024-07-25 20:04:29.797717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.797748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.797912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.797953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.798071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.798102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.798235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.798262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.798436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.798480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.798596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.798628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.798827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.798872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.799003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.799032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.799154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.799181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 00:34:20.699 [2024-07-25 20:04:29.799285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.799313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.699 qpair failed and we were unable to recover it. 
00:34:20.699 [2024-07-25 20:04:29.799558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.699 [2024-07-25 20:04:29.799623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.799729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.799772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.800901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.800931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.801120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.801162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 
00:34:20.700 [2024-07-25 20:04:29.801266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.801294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.801434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.801479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.801596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.801629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.801767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.801797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.801939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.801966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.802095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.802123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.802349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.802378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.802569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.802598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.802757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.802784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.802918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.802945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 
00:34:20.700 [2024-07-25 20:04:29.803099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.803126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.803249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.803277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.803431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.803461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.803588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.803615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.803794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.803824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.803944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.803974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.804122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.804150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.804299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.804327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.804477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.804507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.804638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.804683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 
00:34:20.700 [2024-07-25 20:04:29.804851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.804881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.805033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.805069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.805224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.805254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.805432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.805481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.805591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.805620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.700 [2024-07-25 20:04:29.805800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.700 [2024-07-25 20:04:29.805857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.700 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.805988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.806018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.806178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.806225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.806513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.806564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.806735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.806781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 
00:34:20.701 [2024-07-25 20:04:29.806915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.806943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.807063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.807105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.807257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.807287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.807454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.807484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.807640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.807704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.807952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.808129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.808300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.808470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.808609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 
00:34:20.701 [2024-07-25 20:04:29.808807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.808963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.808990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.809145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.809192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.809318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.809367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.809628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.809679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.809804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.809832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.809938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.809967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.810117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.810147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.810310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.810339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.810452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.810483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 
00:34:20.701 [2024-07-25 20:04:29.810623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.810684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.810825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.810854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.811039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.811076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.811176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.811204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.811375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.701 [2024-07-25 20:04:29.811421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.701 qpair failed and we were unable to recover it. 00:34:20.701 [2024-07-25 20:04:29.811664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.811718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.811851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.811879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.812016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.812044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.812216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.812244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.812380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.812410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-25 20:04:29.812545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.812574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.812767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.812835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.812975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.813004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.813154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.813181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.813300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.813332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.813564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.813617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.813767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.813811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.813921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.813949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.814078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.814111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.814285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.814315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-25 20:04:29.814528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.814576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.814837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.814892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.815042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.815079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.815234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.815280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.815431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.815461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.815624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.815668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.815794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.815823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.815946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.815973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.816106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.816133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.816258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.816289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-25 20:04:29.816434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.816464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.816594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.816624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.816764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.816794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.816932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.816961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.817171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.817198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.817331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.817377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.817572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.817599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.817721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.817767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.817899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.817927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.818075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.818134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-25 20:04:29.818252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.818298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.818419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.702 [2024-07-25 20:04:29.818451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-25 20:04:29.818594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.818625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.818765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.818796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.818899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.818931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.819072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.819122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.819252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.819280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.819443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.819472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.819686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.819715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.819829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.819858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-25 20:04:29.820024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.820055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.820208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.820238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.820405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.820436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.820602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.820633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.820850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.820881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.821020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.821051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.821183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.821211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.821355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.821385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.821518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.821549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.821686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.821716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-25 20:04:29.821863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.821922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.822086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.822116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.822261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.822306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.822455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.822500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.822629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.822675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.822828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.822856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.822950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.822978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.823080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.823108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.823239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.823266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.823429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.823456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-25 20:04:29.823585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.823614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.823745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.823773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.823873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.823902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.824000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.824028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.824169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.824213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.824390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.824421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.824558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.824588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.824694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.824724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.824893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.703 [2024-07-25 20:04:29.824922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-25 20:04:29.825045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.825083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 
00:34:20.704 [2024-07-25 20:04:29.825214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.825247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.825423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.825454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.825588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.825618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.825732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.825763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.825906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.825937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.826092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.826120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.826265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.826295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.826512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.826542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.826707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.826737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.826875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.826905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 
00:34:20.704 [2024-07-25 20:04:29.827011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.827053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.827183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.827210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.827325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.827354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.827606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.827660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.827822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.827851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.828074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.828101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.828194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.828221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.828375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.828402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.828628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.828685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.828866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.828895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 
00:34:20.704 [2024-07-25 20:04:29.829022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.829051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.829179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.829208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.829354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.829398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.829575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.829620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.829857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.829910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.830056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.830113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.830286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.830330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.830526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.830579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.830777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.830833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.830961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.830988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 
00:34:20.704 [2024-07-25 20:04:29.831108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.831139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.831325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.831373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.831518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.831562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.831756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.831822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.831956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.831986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.832130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.832157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-25 20:04:29.832277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.704 [2024-07-25 20:04:29.832307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.832470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.832500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.832615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.832645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.832783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.832813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 
00:34:20.705 [2024-07-25 20:04:29.832943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.832984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.833115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.833144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.833297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.833342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.833545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.833599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.833750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.833795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.833923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.833950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.834107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.834138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.834303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.834332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.834497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.834563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.834803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.834855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 
00:34:20.705 [2024-07-25 20:04:29.834962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.834992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.835149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.835176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.835324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.835355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.835489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.835519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.835663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.835692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.835849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.835895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.836042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.836096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.836235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.836280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.836477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.836550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.836734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.836779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 
00:34:20.705 [2024-07-25 20:04:29.836918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.836945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.837101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.837133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.837246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.837276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.837391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.837421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.837637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.837692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.837856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.837886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.837997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.838026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.838182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.838210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.838311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.838353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-25 20:04:29.838517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.838546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 
00:34:20.705 [2024-07-25 20:04:29.838776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-25 20:04:29.838806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.838946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.838975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.839090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.839133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.839291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.839351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.839510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.839555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.839728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.839797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.839925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.839952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.840046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.840212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.840345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 
00:34:20.706 [2024-07-25 20:04:29.840470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.840623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.840750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.840910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.840937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.841032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.841174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.841306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.841464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.841613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.841762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 
00:34:20.706 [2024-07-25 20:04:29.841911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.841938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.842037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.842077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.842260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.842310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.842443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.842470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.842574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.842601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.842730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.842757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.842915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.842942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.843078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.843107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.843242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.843269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.843393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.843420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 
00:34:20.706 [2024-07-25 20:04:29.843551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.843622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.843730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.843758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-25 20:04:29.843869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-25 20:04:29.843910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.844044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.844080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.844256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.844303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.844517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.844572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.844774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.844827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.844933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.844961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.845116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.845148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.845313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.845343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 
00:34:20.707 [2024-07-25 20:04:29.845478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.845508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.845620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.845649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.845825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.845851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.845942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.845972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.846100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.846127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.846261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.846288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.846442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.846472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.846639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.846669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.846808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.846838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.846967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.846993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 
00:34:20.707 [2024-07-25 20:04:29.847101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.847128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.847279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.847306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.847490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.847519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.847677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.847706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.847845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.847875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.848016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.848047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.848208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.848235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.848367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.848394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.848569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.848598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.848829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.848884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 
00:34:20.707 [2024-07-25 20:04:29.849028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.849057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.849296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.849323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.849487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.849517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.849726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.849768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.849936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.849966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.850114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.850141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-25 20:04:29.850241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-25 20:04:29.850269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.850418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.850448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.850650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.850718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.850847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.850877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-25 20:04:29.850987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.851016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.851156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.851184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.851335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.851379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.851511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.851541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.851711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.851740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.851878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.851908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.852021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.852051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.852207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.852235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.852328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.852354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.852538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.852567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-25 20:04:29.852726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.852755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.852920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.852949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.853125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.853153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.853285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.853312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.853443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.853471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.853613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.853642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.853813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.853843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.854009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.854038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.854170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.854198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.854349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.854376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-25 20:04:29.854559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.854622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.854751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.854781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.855006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.855035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.855163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.855189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.855308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.855335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.855434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.855462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.855591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.855618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.855788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.855814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.856046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.856083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.856178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.856205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-25 20:04:29.856356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.856382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.856538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.856567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.856812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.856860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.856961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-25 20:04:29.856989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-25 20:04:29.857155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.857196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.857354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.857402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.857588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.857634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.857811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.857869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.857991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.858018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.858153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.858181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 
00:34:20.709 [2024-07-25 20:04:29.858334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.858364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.858503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.858553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.858702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.858746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.858876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.858903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.859009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.859037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.859199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.859245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.859395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.859426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.859628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.859678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.859840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.859869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.860015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.860042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 
00:34:20.709 [2024-07-25 20:04:29.860190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.860231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.860387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.860419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.860536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.860568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.860733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.860764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.860921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.860948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.861083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.861111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.861237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.861281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.861422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.861453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.861617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.861647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.861769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.861812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 
00:34:20.709 [2024-07-25 20:04:29.861928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.861973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.862110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.862138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.862270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.862298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.862492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.862523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.862656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.862687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.862856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.862886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.863054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-25 20:04:29.863105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-25 20:04:29.863250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.863280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.863452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.863483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.863622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.863652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 
00:34:20.710 [2024-07-25 20:04:29.863787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.863817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.863947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.863978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.864132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.864161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.864277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.864318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.864492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.864536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.864686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.864719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.864862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.864892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.865070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.865098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.865225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.865252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.865433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.865465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 
00:34:20.710 [2024-07-25 20:04:29.865579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.865610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.865749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.865786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.865931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.865962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.866128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.866157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.866284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.866312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.866481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.866511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.866637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.866682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.866851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.866881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.867046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.867085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.867224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.867251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 
00:34:20.710 [2024-07-25 20:04:29.867378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.867421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.867564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.867594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.867697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.867728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.867868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.867899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.868044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.868079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.868221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.868261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.868410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.868457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.868637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.868681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.868830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.868858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.869009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.869037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 
00:34:20.710 [2024-07-25 20:04:29.869154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.869196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.869304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.710 [2024-07-25 20:04:29.869332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.710 qpair failed and we were unable to recover it. 00:34:20.710 [2024-07-25 20:04:29.869435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.869462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.869618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.869648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.869861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.869917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.870048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.870087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.870250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.870279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.870484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.870538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.870681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.870716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.870896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.870945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 
00:34:20.711 [2024-07-25 20:04:29.871081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.871111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.871239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.871266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.871415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.871446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.871566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.871611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.871780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.871811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.871934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.871965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.872143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.872170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.872291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.872319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.872446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.872477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.872741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.872791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 
00:34:20.711 [2024-07-25 20:04:29.872953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.872984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.873139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.873166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.873325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.873352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.873495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.873525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.873669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.873699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.873928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.873958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.874098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.874126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.874255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.874283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.874407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.874434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.874599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.874629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 
00:34:20.711 [2024-07-25 20:04:29.874845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.711 [2024-07-25 20:04:29.874875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.711 qpair failed and we were unable to recover it. 00:34:20.711 [2024-07-25 20:04:29.875016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.875043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.875174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.875201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.875355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.875382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.875592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.875645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.875765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.875800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.875972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.876002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.876126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.876154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.876246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.876273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.876411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.876441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 
00:34:20.712 [2024-07-25 20:04:29.876591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.876650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.876804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.876830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.877054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.877086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.877216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.877243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.877416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.877446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.877619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.877677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.877819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.877849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.877992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.878021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.878173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.878201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.878330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.878357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 
00:34:20.712 [2024-07-25 20:04:29.878487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.878514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.878675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.878705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.878827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.878870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.879041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.879219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.879339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.879494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.879673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.879834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.879989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.880016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 
00:34:20.712 [2024-07-25 20:04:29.880160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.880201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.880343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.880383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.880571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.880618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.880826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.880854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.880982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.881009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.881141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.712 [2024-07-25 20:04:29.881169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.712 qpair failed and we were unable to recover it. 00:34:20.712 [2024-07-25 20:04:29.881314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.881344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.881492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.881519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.881647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.881674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.881805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.881833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 
00:34:20.713 [2024-07-25 20:04:29.881968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.881995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.882120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.882147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.882269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.882296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.882391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.882418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.882543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.882586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.882727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.882757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.882913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.882940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.883066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.883093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.883236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.883265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.883405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.883435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 
00:34:20.713 [2024-07-25 20:04:29.883573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.883603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.883740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.883770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.883903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.883933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.884077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.884123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.884277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.884304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.884441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.884471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.884649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.884676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.884894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.884923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.885066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.885110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.885214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.885242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 
00:34:20.713 [2024-07-25 20:04:29.885370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.885401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.885566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.885595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.885709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.885739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.885868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.885898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.886029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.886065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.886234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.886261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.886414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.886441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.886566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.886611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.886824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.886854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.886974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.887001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 
00:34:20.713 [2024-07-25 20:04:29.887119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.887147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.713 [2024-07-25 20:04:29.887267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.713 [2024-07-25 20:04:29.887294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.713 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.887421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.887448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.887580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.887627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.887791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.887821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.887941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.887971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.888126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.888154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.888281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.888308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.888484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.888529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.888695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.888724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 
00:34:20.714 [2024-07-25 20:04:29.888840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.888870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.888973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.889000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.889161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.889188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.889286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.889312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.889449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.889491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.889623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.889652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.889774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.889803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.889973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.890002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.890176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.890218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.890370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.890401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 
00:34:20.714 [2024-07-25 20:04:29.890519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.890563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.890726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.890756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.890936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.890989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.891175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.891203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.891329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.891374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.891481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.891524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.891685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.891717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.891819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.891850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.891999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.892029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.892175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.892203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 
00:34:20.714 [2024-07-25 20:04:29.892300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.892331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.892458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.892485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.892633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.892663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.892807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.892839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.892977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.893021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.893125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.893154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.893306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.893353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.893494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.893525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.714 qpair failed and we were unable to recover it. 00:34:20.714 [2024-07-25 20:04:29.893687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.714 [2024-07-25 20:04:29.893717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.893853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.893884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 
00:34:20.715 [2024-07-25 20:04:29.893999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.894029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.894219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.894260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.894414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.894461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.894639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.894685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.894840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.894885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.895050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.895085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.895218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.895246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.895395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.895426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.895573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.895605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.895774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.895804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 
00:34:20.715 [2024-07-25 20:04:29.895946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.895976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.896100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.896128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.896283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.896311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.896467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.896497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.896665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.896696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.896803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.896835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.896950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.896981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.897152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.897193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.897331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.897371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.897470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.897515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 
00:34:20.715 [2024-07-25 20:04:29.897651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.897682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.897898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.897952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.898067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.898113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.898217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.898244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.898353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.898380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.898504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.898531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.898693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.898722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.898917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.898946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.899126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.899154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.899283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.899310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 
00:34:20.715 [2024-07-25 20:04:29.899481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.899513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.899704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.899758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.715 qpair failed and we were unable to recover it. 00:34:20.715 [2024-07-25 20:04:29.899875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.715 [2024-07-25 20:04:29.899908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.900057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.900092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.900248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.900275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.900426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.900457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.900677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.900738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.900873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.900903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.901071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.901119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.901220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.901248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 
00:34:20.716 [2024-07-25 20:04:29.901404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.901431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.901631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.901662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.901806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.901836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.901975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.902005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.902143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.902171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.902319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.902348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.902483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.902513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.902626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.902656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.902832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.902862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.902996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.903026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 
00:34:20.716 [2024-07-25 20:04:29.903187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.903215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.903328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.903358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.903492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.903522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.903652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.903682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.903845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.903875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.903995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.904022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.904158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.904186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.904326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.904367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.904520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.904567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.904711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.904757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 
00:34:20.716 [2024-07-25 20:04:29.904914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.904941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.905046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.905088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.905242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.905287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.905459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.905504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.905676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.905719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-25 20:04:29.905817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-25 20:04:29.905845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.905982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.906011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.906185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.906226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.906380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.906412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.906631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.906692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-25 20:04:29.906802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.906850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.907032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.907073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.907246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.907275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.907495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.907524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.907729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.907759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.907932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.907961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.908081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.908121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.908259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.908287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.908436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.908464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.908735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.908786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-25 20:04:29.908926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.908956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.909117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.909145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.909272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.909299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.909486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.909550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.909760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.909789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.909952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.909982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.910128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.910155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.910255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.910281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.910425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.910486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.910615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.910642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-25 20:04:29.910853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.910904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.911021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.911048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.911207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.911234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.911326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.911373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.911483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.911510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.911690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.911719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.911846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.911893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.912103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.912131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.912265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.912292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-25 20:04:29.912428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.912456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-25 20:04:29.912635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-25 20:04:29.912664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.912795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.912825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.912995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.913024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.913178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.913219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.913381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.913410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.913584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.913647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.913903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.913955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.914113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.914142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.914283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.914328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.914571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.914600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-25 20:04:29.914768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.914813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.914947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.914975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.915089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.915133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.915286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.915316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.915479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.915509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.915644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.915674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.915807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.915837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.915978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.916008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.916162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.916189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.916313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.916343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-25 20:04:29.916453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.916483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.916614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.916643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.916782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.916811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.917023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.917053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.917210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.917239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.917364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.917410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.917546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.917576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.917721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.917751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.917858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.917889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.918046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.918080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-25 20:04:29.918284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.918311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.918453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.918483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.918588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.918618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.918758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.918788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.918931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.918961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.919116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-25 20:04:29.919144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-25 20:04:29.919264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.919294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.919458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.919487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.919599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.919634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.919771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.919802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 
00:34:20.719 [2024-07-25 20:04:29.919964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.919993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.920166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.920206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.920353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.920393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.920544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.920591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.920828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.920884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.921004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.921032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.921142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.921170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.921298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.921347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.921492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.921536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.921656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.921687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 
00:34:20.719 [2024-07-25 20:04:29.921835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.921864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.921990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.922137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.922308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.922480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.922614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.922783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.922953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.922983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.923134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.923162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.923293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.923320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 
00:34:20.719 [2024-07-25 20:04:29.923447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.923491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.923632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.923662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.923787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.923831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.923994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.924024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.924182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.924210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.924337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.924368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.924500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.924543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.924678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.924708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.924812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.924841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.925052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.925087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 
00:34:20.719 [2024-07-25 20:04:29.925233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.925260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-25 20:04:29.925388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-25 20:04:29.925414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.925517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.925545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.925714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.925757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.925960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.925987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.926115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.926143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.926265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.926292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.926433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.926463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.926666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.926724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.926867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.926897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 
00:34:20.720 [2024-07-25 20:04:29.927084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.927125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.927247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.927287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.927443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.927474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.927626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.927655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.927799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.927831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.927944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.927973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.928157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.928184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.928294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.928324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.928489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.928517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.928667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.928712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 
00:34:20.720 [2024-07-25 20:04:29.928870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.928900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.929038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.929087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.929262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.929312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.929530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.929584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.929697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.929727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.929887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.929916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.930065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.930091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.930224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.930251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.930376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.930403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.930548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.930578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 
00:34:20.720 [2024-07-25 20:04:29.930744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.930774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.930941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.930971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-25 20:04:29.931126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-25 20:04:29.931153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.931282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.931308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.931482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.931511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.931711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.931741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.931887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.931916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.932037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.932196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.932336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 
00:34:20.721 [2024-07-25 20:04:29.932530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.932672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.932812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.932950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.932976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.933105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.933133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.933291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.933318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.933468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.933497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.933605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.933632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.933785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.933815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.933955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.933984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 
00:34:20.721 [2024-07-25 20:04:29.934109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.934137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.934264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.934290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.934405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.934434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.934564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.934594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.934725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.934754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.934903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.934932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.935040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.935076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.935227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.935253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.935348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.935394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.935599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.935629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 
00:34:20.721 [2024-07-25 20:04:29.935769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.935798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.935942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.935971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.936122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.936153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.936310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.936336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.721 [2024-07-25 20:04:29.936429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.721 [2024-07-25 20:04:29.936455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.721 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.936598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.936627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.936740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.936770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.936934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.936963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.937150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.937191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.937353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.937382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 
00:34:20.722 [2024-07-25 20:04:29.937506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.937550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.937695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.937740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.937866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.937893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.938042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.938244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.938409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.938612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.938762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.938893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.938987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 
00:34:20.722 [2024-07-25 20:04:29.939118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.939272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.939423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.939607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.939760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.939920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.939947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.940133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.940165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.940305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.940336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.940476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.940505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.940635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.940662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 
00:34:20.722 [2024-07-25 20:04:29.940795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.940822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.940939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.940966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.941104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.941136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.941308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.941352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.941520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.941568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.941694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.941721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.941841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.941868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.941966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.941994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.942112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.942140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.942292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.942319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 
00:34:20.722 [2024-07-25 20:04:29.942445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.942491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.722 [2024-07-25 20:04:29.942616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.722 [2024-07-25 20:04:29.942643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.722 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.942769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.942801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.942954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.942981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.943145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.943190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.943362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.943393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.943499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.943529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.943640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.943669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.943776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.943805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.943944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.943975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 
00:34:20.723 [2024-07-25 20:04:29.944150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.944178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.944325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.944372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.944540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.944586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.944736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.944780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.944908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.944936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.945039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.945071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.945228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.945272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.945452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.945500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.945648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.945692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.945847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.945874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 
00:34:20.723 [2024-07-25 20:04:29.945977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.946004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.946144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.946190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.946343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.946389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.946562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.946608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.946731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.946758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.946891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.946918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.947042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.947077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.947192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.947223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.947416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.947461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.947662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.947690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 
00:34:20.723 [2024-07-25 20:04:29.947823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.947851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.948000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.948040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.948227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.948259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.948434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.948465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.948577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.948606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.948718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.948749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.948914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.948943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.949102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.949130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.949261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.723 [2024-07-25 20:04:29.949306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.723 qpair failed and we were unable to recover it. 00:34:20.723 [2024-07-25 20:04:29.949452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.949482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 
00:34:20.724 [2024-07-25 20:04:29.949664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.949693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.949844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.949871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.949961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.949993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.950937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.950966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 
00:34:20.724 [2024-07-25 20:04:29.951125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.951153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.951251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.951278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.951380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.951408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.951555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.951585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.951721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.951750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.951869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.951897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.952033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.952065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.952190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.952218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.952337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.952367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.952509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.952539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 
00:34:20.724 [2024-07-25 20:04:29.952691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.952721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.952910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.952957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.953095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.953123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.953264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.953294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.953421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.953466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.953610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.953655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.953758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.953786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.953881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.953909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.954035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.954070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.954204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.954235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 
00:34:20.724 [2024-07-25 20:04:29.954375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.954402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.954526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.954554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.954713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.724 [2024-07-25 20:04:29.954740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.724 qpair failed and we were unable to recover it. 00:34:20.724 [2024-07-25 20:04:29.954869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.954897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 
00:34:20.725 [2024-07-25 20:04:29.955860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.955888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.955992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.956020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.956173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.956223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.956349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.956394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.956570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.956614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.956746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.956774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.956866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.956893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.957014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.957042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.957204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.957235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.957349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.957378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 
00:34:20.725 [2024-07-25 20:04:29.957514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.957544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.957686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.957715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.957858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.957887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.958004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.958033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.958193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.958220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.958391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.958421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.958539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.958569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.958735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.958764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.958901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.958930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.959075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.959120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 
00:34:20.725 [2024-07-25 20:04:29.959216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-25 20:04:29.959243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-25 20:04:29.959381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.959408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.959499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.959525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.959661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.959690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.959852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.959882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.960008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.960037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.960176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.960217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.960372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.960418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.960566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.960616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.960775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.960820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-25 20:04:29.960920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.960947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.961050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.961085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.961229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.961275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.961423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.961469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.961587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.961615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.961746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.961775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.961930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.961958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.962105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.962136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.962327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.962375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.962522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.962566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-25 20:04:29.962694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.962721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.962874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.962902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.963022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.963057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.963209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.963240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.963395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.963440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.963583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.963626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.963731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.963759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.963890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.963918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.964045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.964083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.964230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.964275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-25 20:04:29.964425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.964469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.964592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.964638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.964791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.964832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.964965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.964994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-25 20:04:29.965125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-25 20:04:29.965153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.965259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.965287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.965398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.965425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.965553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.965580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.965711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.965741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.965866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.965894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 
00:34:20.727 [2024-07-25 20:04:29.966046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.966083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.966204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.966234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.966431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.966476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.966592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.966638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.966751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.966780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.966905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.966932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.967031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.967064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.967216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.967245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4134120 Killed "${NVMF_APP[@]}" "$@"
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.967375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.967405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.967512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.967542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:20.727 [2024-07-25 20:04:29.967678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.967708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:20.727 [2024-07-25 20:04:29.967868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.967898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:20.727 [2024-07-25 20:04:29.968004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.968034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:20.727 [2024-07-25 20:04:29.968231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.968259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:20.727 [2024-07-25 20:04:29.968422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.968450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.968628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.968673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.968812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.968842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.968986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.969014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.969205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.969250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.969353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.969379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.969509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.969554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.969694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.969737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.969846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.969873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.970046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.970093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.970258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.970289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.970457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.970488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.970610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.727 [2024-07-25 20:04:29.970640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.727 qpair failed and we were unable to recover it.
00:34:20.727 [2024-07-25 20:04:29.970749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.970780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.970927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-25 20:04:29.970956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-25 20:04:29.971087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.971116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.971244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.971288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.971444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.971490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.971630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.971678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.971785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.971814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.971989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.972030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.972171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4134628 00:34:20.728 [2024-07-25 20:04:29.972211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4134628
00:34:20.728 [2024-07-25 20:04:29.972367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.728 [2024-07-25 20:04:29.972401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.728 qpair failed and we were unable to recover it.
00:34:20.728 [2024-07-25 20:04:29.972533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.728 [2024-07-25 20:04:29.972563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4134628 ']'
00:34:20.728 qpair failed and we were unable to recover it.
00:34:20.728 [2024-07-25 20:04:29.972697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:20.728 [2024-07-25 20:04:29.972727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.728 qpair failed and we were unable to recover it.
00:34:20.728 [2024-07-25 20:04:29.972827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:20.728 [2024-07-25 20:04:29.972856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:20.728 qpair failed and we were unable to recover it.
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:20.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:20.728 [2024-07-25 20:04:29.972997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.728 [2024-07-25 20:04:29.973025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:20.728 qpair failed and we were unable to recover it.
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:20.728 [2024-07-25 20:04:29.973174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.728 [2024-07-25 20:04:29.973218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:20.728 qpair failed and we were unable to recover it.
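The waitforlisten 4134628 call above is the shell helper whose body produces the following common/autotest_common.sh trace lines: it blocks until the freshly launched nvmf_tgt (PID 4134628) accepts connections on the RPC socket /var/tmp/spdk.sock, giving up after the max_retries=100 seen in the trace. The helper itself is bash; the C sketch below only illustrates the underlying pattern of polling a UNIX-domain socket until a listener appears. The function name and the 100 ms interval are illustrative choices, not SPDK's actual implementation.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <time.h>
    #include <unistd.h>

    /* Sketch of the idea behind waitforlisten: keep trying to connect to the
     * RPC socket path until something accepts, or give up after max_retries.
     * Illustration of the polling pattern only. */
    static int wait_for_unix_listener(const char *path, int max_retries)
    {
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;                     /* listener is up */
            }
            close(fd);                        /* not listening yet; retry shortly */
            struct timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };
            nanosleep(&ts, NULL);
        }
        return -1;                            /* gave up after max_retries */
    }

    int main(void)
    {
        if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0)
            puts("listener is ready");
        else
            puts("timed out waiting for listener");
        return 0;
    }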
00:34:20.728 20:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.728 [2024-07-25 20:04:29.973378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.973415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.973534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.973564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.973732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.973761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.973886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.973918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.974065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.974092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.974201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.974228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.974352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.974379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.974540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.974569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.974694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.974736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.974900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.974928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
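Both nvmfappstart and the nvmf_tgt command above pass -m 0xF0. Read as a CPU bitmask (assuming -m is SPDK's usual cpumask option), 0xF0 is binary 11110000, i.e. bits 4 through 7, so the target's reactors run on cores 4-7. A tiny, purely illustrative decoder:

    #include <stdio.h>

    /* Print which CPU indices are selected by a hex core mask such as 0xF0. */
    static void print_cpumask(unsigned long mask)
    {
        printf("0x%lX ->", mask);
        for (int cpu = 0; mask != 0; cpu++, mask >>= 1) {
            if (mask & 1)
                printf(" %d", cpu);
        }
        printf("\n");
    }

    int main(void)
    {
        print_cpumask(0xF0UL);   /* prints: 0xF0 -> 4 5 6 7 */
        return 0;
    }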
00:34:20.728 [2024-07-25 20:04:29.975064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.975099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.975221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.975247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.975417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.975446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.975587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.975616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.975838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.975866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.975997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.976026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.976162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.976188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.976369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.976413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.976519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.976547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-25 20:04:29.976685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.976715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
00:34:20.728 [2024-07-25 20:04:29.976841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-25 20:04:29.976867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.977053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.977106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.977219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.977245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.977374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.977405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.977545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.977574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.977739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.977768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.977903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.977932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.978046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.978084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.978233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.978259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.978420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.978448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 
00:34:20.729 [2024-07-25 20:04:29.978589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.978618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.978749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.978792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.978929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.978969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.979092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.979120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.979287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.979333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.979454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.979498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.979622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.979649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.979783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.979811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.979969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.979996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.980118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.980149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 
00:34:20.729 [2024-07-25 20:04:29.980311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.980356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.980499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.980544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.980779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.980828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.980982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.981009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.981214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.981241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.981357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.981387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.981543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.981588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.981799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.981844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.981969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.981996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.982121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.982160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 
00:34:20.729 [2024-07-25 20:04:29.982326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.982353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.982460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.982485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.982612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.982638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.982779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.982809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.982916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.982944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.983133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.983161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.729 [2024-07-25 20:04:29.983373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.729 [2024-07-25 20:04:29.983400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.729 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.983524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.983553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.983694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.983724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.983928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.983957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 
00:34:20.730 [2024-07-25 20:04:29.984072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.984125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.984227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.984253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.984360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.984388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.984494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.984521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.984667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.984698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.984859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.984888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.985042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.985073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.985200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.985225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.985336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.985375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.985497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.985528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 
00:34:20.730 [2024-07-25 20:04:29.985693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.985723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.985860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.985889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.986024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.986054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.986212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.986241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.986373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.986400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.986536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.986566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.986698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.986727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.986862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.986890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.987027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.987056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.987196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.987224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 
00:34:20.730 [2024-07-25 20:04:29.987352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.987382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.987532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.987560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.987705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.987732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.987911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.987937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.988069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.988096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.988217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.988242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.988354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.730 [2024-07-25 20:04:29.988382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.730 qpair failed and we were unable to recover it. 00:34:20.730 [2024-07-25 20:04:29.988529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.988555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.988707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.988735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.988847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.988875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 
00:34:20.731 [2024-07-25 20:04:29.989019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.989151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.989277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.989404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.989604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.989769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.989897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.989924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.990046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.990082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.990225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.990250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.990375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.990417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 
00:34:20.731 [2024-07-25 20:04:29.990550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.990577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.990705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.990732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.990850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.990875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.991048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.991214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.991348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.991528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.991688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.991854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.991983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 
00:34:20.731 [2024-07-25 20:04:29.992132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.992256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.992412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.992568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.992719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.992872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.992899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.993069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.993095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.993193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.993218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.993343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.993368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.993481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.993507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 
00:34:20.731 [2024-07-25 20:04:29.993673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.993713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.731 [2024-07-25 20:04:29.993819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.731 [2024-07-25 20:04:29.993848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.731 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.993992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.994869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.994990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 
00:34:20.732 [2024-07-25 20:04:29.995187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.995317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.995463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.995592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.995715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.995905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.995933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.996052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.996241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.996367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.996495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 
00:34:20.732 [2024-07-25 20:04:29.996643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.996777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.996943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.996983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.997113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.997141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.997291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.997318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.997443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.997469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.997598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.997623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.997727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.997754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.997883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.997909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.998054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.998086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 
00:34:20.732 [2024-07-25 20:04:29.998222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.998249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.998393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.998420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.998525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.998551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:20.732 [2024-07-25 20:04:29.998655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-25 20:04:29.998683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.998779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.998805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.998929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.998955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.999082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.999109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.999230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.999256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.999387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.999414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.999535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.999560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 
00:34:20.733 [2024-07-25 20:04:29.999713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.999739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:29.999868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:29.999895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.000054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.000194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.000347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.000503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.000680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.000840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.000983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.001160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 
00:34:20.733 [2024-07-25 20:04:30.001349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.001494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.001658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.001781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.001909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.001936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.002084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.002251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.002379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.002538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.002695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 
00:34:20.733 [2024-07-25 20:04:30.002821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.002963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.002990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.003935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.733 [2024-07-25 20:04:30.003962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.733 qpair failed and we were unable to recover it. 00:34:20.733 [2024-07-25 20:04:30.004083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.004111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 
00:34:20.734 [2024-07-25 20:04:30.004244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.004271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.004415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.004442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.004567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.004593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.004717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.004743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.004856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.004886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 
00:34:20.734 [2024-07-25 20:04:30.005695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.005969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.005998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.006967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.006994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 
00:34:20.734 [2024-07-25 20:04:30.007095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.007221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.007345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.007468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.007623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.007776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.007909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.007936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 
00:34:20.734 [2024-07-25 20:04:30.008443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.008880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.008982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.009009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.009150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.734 [2024-07-25 20:04:30.009177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.734 qpair failed and we were unable to recover it. 00:34:20.734 [2024-07-25 20:04:30.009283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.009309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.009416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.009443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.009542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.009569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.009717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.009743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 
00:34:20.735 [2024-07-25 20:04:30.009840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.009866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.009991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.010132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.010312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.010461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.010594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.010796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.010970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.010998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.011111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.011139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.011233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.011260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 
00:34:20.735 [2024-07-25 20:04:30.011371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.011399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.011532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.011560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.011704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.011735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.011888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.011915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.012034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.012226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.012345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.012469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.012590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.012730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 
00:34:20.735 [2024-07-25 20:04:30.012861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.012901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.735 [2024-07-25 20:04:30.013034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.735 [2024-07-25 20:04:30.013079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.735 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.013187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.013213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.013313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.013339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.013451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.013477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.013584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.013610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.013744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.013770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.013899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.013925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.014019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.014045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.736 [2024-07-25 20:04:30.014154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.014181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 
00:34:20.736 [2024-07-25 20:04:30.014280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.736 [2024-07-25 20:04:30.014307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.736 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.014405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.014432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.014565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.014591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.014688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.014714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.014829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.014868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.014978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.015126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.015261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.015398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.015529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 
00:34:20.737 [2024-07-25 20:04:30.015683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.015848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.015878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.016911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.016938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.017098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.017138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 
00:34:20.737 [2024-07-25 20:04:30.017275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.017304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.017422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.017450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.017554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.017582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.017715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.017742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.017849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.017877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.017976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.018126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.018260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.018401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.018533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 
00:34:20.737 [2024-07-25 20:04:30.018666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.018850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.018878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.018981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.019126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.019257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.019399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.019577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.019718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.019841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.019868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 00:34:20.737 [2024-07-25 20:04:30.020003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.737 [2024-07-25 20:04:30.020029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.737 qpair failed and we were unable to recover it. 
00:34:20.738 [2024-07-25 20:04:30.020170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.020209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.020386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.020414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.020547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.020576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.020682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.020710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.020843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.020870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021460] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:34:20.738 [2024-07-25 20:04:30.021488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 
00:34:20.738 [2024-07-25 20:04:30.021523] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.738 [2024-07-25 20:04:30.021593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.021876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.021978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.022106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.022246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.022373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.022528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.022689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 
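The bracketed "Starting SPDK v24.05.1-pre ... / DPDK 23.11.0 initialization" and "[ DPDK EAL parameters: nvmf -c 0xF0 ... ]" entries interleaved above are the nvmf application bringing up SPDK's environment layer on core mask 0xF0 while the connection errors continue. As a rough sketch of how such EAL arguments are typically produced by an SPDK application; the option field names follow include/spdk/env.h in recent SPDK releases and should be treated as assumptions to verify against the tree under test:

/* Hedged sketch, not the test's actual code: building the environment options
 * that end up on the DPDK EAL command line logged above. */
#include <stdio.h>

#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvmf";                     /* shows up as the leading "nvmf" token in the EAL line */
    opts.core_mask = "0xF0";                /* "-c 0xF0" in the logged parameters */
    opts.base_virtaddr = 0x200000000000ULL; /* "--base-virtaddr=0x200000000000" */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    /* A real target would create its reactors and nvmf subsystems here. */

    spdk_env_fini();
    return 0;
}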
00:34:20.738 [2024-07-25 20:04:30.022815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.022842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.022960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.023889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.023983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.024159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 
00:34:20.738 [2024-07-25 20:04:30.024296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.024420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.024599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.024729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.024899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.024940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.025052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.025086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.025191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.025218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.025344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.025375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.025485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.738 [2024-07-25 20:04:30.025512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.738 qpair failed and we were unable to recover it. 00:34:20.738 [2024-07-25 20:04:30.025640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.025668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 
00:34:20.739 [2024-07-25 20:04:30.025790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.025817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.025943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.025969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.026948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.026974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.027077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.027106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 
00:34:20.739 [2024-07-25 20:04:30.027243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.027271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.027435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.027461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.027563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.027589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.027688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.027719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.027849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.027877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.028009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.028145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.028271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.028402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.028535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 
00:34:20.739 [2024-07-25 20:04:30.028699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.028871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.028911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.029021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.029050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.029170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.029197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.029312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.029344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.029475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.029503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.029602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.029628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.033157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.033200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.033327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.033357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.033466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.033502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 
00:34:20.739 [2024-07-25 20:04:30.033627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.033658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.033800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.033833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.033969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.034002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.034142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.034177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.034315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.034347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.034496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.034532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.034669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.739 [2024-07-25 20:04:30.034705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.739 qpair failed and we were unable to recover it. 00:34:20.739 [2024-07-25 20:04:30.034819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.034848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.034961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.034991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.035117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.035147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 
00:34:20.740 [2024-07-25 20:04:30.035300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.035349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.035518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.035545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.035644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.035671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.035803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.035829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.035955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.035981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.036122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.036149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.036261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.036290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.036400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.036428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.036556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.036583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.036683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.036710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 
00:34:20.740 [2024-07-25 20:04:30.036835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.036862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.036997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.037144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.037268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.037425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.037584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.037768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.037897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.037923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.038019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.038046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.038186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.038213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 
00:34:20.740 [2024-07-25 20:04:30.038327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.038362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.038492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.038519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.038683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.038723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.038839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.038868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.039027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.039076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.039216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.039242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.039408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.039434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.039564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.039590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.039721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.039748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.039900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.039926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 
00:34:20.740 [2024-07-25 20:04:30.040081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.040240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.040370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.040527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.040677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.040820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.040950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.040979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.740 [2024-07-25 20:04:30.041082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.740 [2024-07-25 20:04:30.041110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.740 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.041222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.041249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.041354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.041380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 
00:34:20.741 [2024-07-25 20:04:30.041479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.041506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.041630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.041657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.041768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.041796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.041950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.041977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.042113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.042141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.042265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.042293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.042454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.042481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.042608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.042635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.042740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.042767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.042925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.042951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 
00:34:20.741 [2024-07-25 20:04:30.043094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.043122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.043291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.043331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.043459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.043485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.043612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.043638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.043740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.043768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.043862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.043888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.044045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.044179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.044308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.044461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 
00:34:20.741 [2024-07-25 20:04:30.044616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.044769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.044887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.044913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.045019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.045046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.741 [2024-07-25 20:04:30.045205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.741 [2024-07-25 20:04:30.045250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.741 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.045386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.045414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.045639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.045666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.045780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.045807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.045916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.045943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.046095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 
00:34:20.742 [2024-07-25 20:04:30.046227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.046359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.046490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.046641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.046800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.046971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.046998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.047127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.047255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.047390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.047515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 
00:34:20.742 [2024-07-25 20:04:30.047674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.047806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.047935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.047962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.048071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.048099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.048204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.048230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.048439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.048466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.048602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.048629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.048726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.048752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.048909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.048936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.049070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.049097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 
00:34:20.742 [2024-07-25 20:04:30.049256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.049283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.049402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.049453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.049692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.049720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.049854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.049882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.049984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.050017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.050133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.742 [2024-07-25 20:04:30.050161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.742 qpair failed and we were unable to recover it. 00:34:20.742 [2024-07-25 20:04:30.050291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.050329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.050470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.050496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.050611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.050652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.050788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.050814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 
00:34:20.743 [2024-07-25 20:04:30.050943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.050971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.051102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.051137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.051296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.051323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.051462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.051489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.051625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.051657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.051867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.051894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.051994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.052166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.052322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.052490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 
00:34:20.743 [2024-07-25 20:04:30.052629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.052788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.052944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.052971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.053130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.053258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.053414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.053559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.053714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.053844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.053977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 
00:34:20.743 [2024-07-25 20:04:30.054121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.054278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.054413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.054598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.054755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.054880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.054907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.055039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.055217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.055344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.055481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 
00:34:20.743 [2024-07-25 20:04:30.055677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.055827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.055964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.743 [2024-07-25 20:04:30.055993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.743 qpair failed and we were unable to recover it. 00:34:20.743 [2024-07-25 20:04:30.056131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.056159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.056287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.056313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.056451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.056478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.056632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.056658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.056788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.056814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.056946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.056972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.057132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.057159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 
00:34:20.744 [2024-07-25 20:04:30.057291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.057319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.057424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.057450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.057555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.057582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.057740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.057766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.057892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.057923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.058068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.058096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.058221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.058247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.058350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.058376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.058528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.058554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.058656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.058683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 
00:34:20.744 [2024-07-25 20:04:30.058812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.058839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.058990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.059117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.059285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.059437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.059592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.059770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.059929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.059955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.060074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.060103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.060227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.060255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 
00:34:20.744 [2024-07-25 20:04:30.060387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.060414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.060513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.060540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.060654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.060683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.060807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.060834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.060996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.061022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.061172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.061200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.061331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.061363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.061515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.061542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.061701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.061728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.744 [2024-07-25 20:04:30.061886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.061912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 
00:34:20.744 [2024-07-25 20:04:30.062007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.744 [2024-07-25 20:04:30.062035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.744 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.062196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.062223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.062355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.062385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.062508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.062535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.062665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.062692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.062814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.062841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.745 [2024-07-25 20:04:30.062943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.062969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.063100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.063128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.063260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.063286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.063441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.063468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 
00:34:20.745 [2024-07-25 20:04:30.063626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.063653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.063791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.063817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.063959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.063985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.064142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.064169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.064275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.064307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.064473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.064500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.064626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.064652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.064759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.064785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.064893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.064921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.065071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.065098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 
00:34:20.745 [2024-07-25 20:04:30.065230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.065257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.065421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.065453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.065590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.065616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.065747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.065774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.065910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.065936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.066111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.066137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.066232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.066258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.066361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.066388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.066535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.066561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.066672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.066699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 
00:34:20.745 [2024-07-25 20:04:30.066839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.066866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.066998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.067024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.067168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.067196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.067345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.067394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.067530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.067558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.067689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.067716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.067891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.067918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.068150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.745 [2024-07-25 20:04:30.068179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.745 qpair failed and we were unable to recover it. 00:34:20.745 [2024-07-25 20:04:30.068335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.068371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.068486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.068513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-25 20:04:30.068647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.068675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.068803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.068830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.068957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.068996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.069134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.069263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.069452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.069586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.069709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.069856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.069980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-25 20:04:30.070130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.070286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.070431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.070584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.070735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.070898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.070924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.071026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.071070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.071227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.071253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.071378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.071403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.071505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.071531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-25 20:04:30.071657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.071684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.071814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.071840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.071995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.072021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.072145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.072172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.072295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.072321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.072482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.072508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.072661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.072688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.072848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.072874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-25 20:04:30.073029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.746 [2024-07-25 20:04:30.073080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.073173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-25 20:04:30.073292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.073429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.073556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.073696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.073823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.073962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.073988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.074111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.074139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.074294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.074319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.074417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.074447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.074566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.074592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-25 20:04:30.074747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.074773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.074899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.074926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.075080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.075238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.075374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.075604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.075753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.075903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.075999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.076186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-25 20:04:30.076318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.076448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.076571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.076698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.076874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.076914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.077087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.077117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.077221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.077248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.077368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.077398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.077529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.077556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.077684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.077711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-25 20:04:30.077843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.077871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.077999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.078025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.078159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.078186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.078315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.078341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.078455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.078481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.078573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.078599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-25 20:04:30.078728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.747 [2024-07-25 20:04:30.078754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.078851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.078877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.079017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.079042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.079187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.079215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 
00:34:20.748 [2024-07-25 20:04:30.079322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.079350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.079508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.079534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.079667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.079694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.079826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.079853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.080009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.080153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.080308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.080436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.080565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.080729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 
00:34:20.748 [2024-07-25 20:04:30.080860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.080886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.081891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.081917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.082083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.082218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 
00:34:20.748 [2024-07-25 20:04:30.082367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.082498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.082637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.082794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.082915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.082941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.083069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.083222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.083379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.083538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.083665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 
00:34:20.748 [2024-07-25 20:04:30.083825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.083949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.083975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.084114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.084141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.084276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.084302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.748 [2024-07-25 20:04:30.084465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.748 [2024-07-25 20:04:30.084499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.748 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.084603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.084629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.084756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.084782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.084887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.084912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.085015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.085042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.085161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.085191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 
00:34:20.749 [2024-07-25 20:04:30.085325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.085352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.085504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.085530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.085655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.085682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.085801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.085827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.085991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.086174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.086326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.086484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.086635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.086813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 
00:34:20.749 [2024-07-25 20:04:30.086966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.086993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.087133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.087296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.087441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.087594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.087751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.087870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.087977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.088139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.088259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 
00:34:20.749 [2024-07-25 20:04:30.088418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.088590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.088746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.088870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.088896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.089007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.089149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.089283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.089420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.089550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.089705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 
00:34:20.749 [2024-07-25 20:04:30.089892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.089918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.090039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.090070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.749 [2024-07-25 20:04:30.090224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.749 [2024-07-25 20:04:30.090250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.749 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.090346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.090372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.090490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.090516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.090644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.090670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.090760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.090786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.090914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.090944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.091086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.091118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.091220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.091247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 
00:34:20.750 [2024-07-25 20:04:30.091355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.091389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.091520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.091546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.091685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.091711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.091818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.091844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.091985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.092141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.092318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.092473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.092640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.092794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 
00:34:20.750 [2024-07-25 20:04:30.092969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.092995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.093892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.093919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.094015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.094187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 
00:34:20.750 [2024-07-25 20:04:30.094305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.094462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.094580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.094703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.094865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.094892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-07-25 20:04:30.095020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-07-25 20:04:30.095046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.751 [2024-07-25 20:04:30.095178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-07-25 20:04:30.095205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-25 20:04:30.095300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.095327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.095436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.095463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.095604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.095630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 
00:34:21.035 [2024-07-25 20:04:30.095756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.095783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.095915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.095941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.096073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.096100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.096256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.096282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.096875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:21.035 [2024-07-25 20:04:30.097218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.097249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.097386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.097424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.097556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.097583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.097709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.097735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.097851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.097877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.097980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 
00:34:21.035 [2024-07-25 20:04:30.098116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.098292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.098475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.098610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.098764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.098907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.098934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.099082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.099228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.099359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.099484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 
00:34:21.035 [2024-07-25 20:04:30.099607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.099789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.099961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.099989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.100096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.100124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.100268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.100294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.100409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.100441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.100575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.100601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.100744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.100771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.100898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.100924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.101057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.101094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 
00:34:21.035 [2024-07-25 20:04:30.101203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.101230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.101327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.101366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.101501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.101527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.101637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.101665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-25 20:04:30.101779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-25 20:04:30.101806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.101917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.101943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.102068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.102229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.102390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.102524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 
00:34:21.036 [2024-07-25 20:04:30.102689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.102822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.102955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.102983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.103151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.103180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.103282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.103309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.103415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.103447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.103586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.103612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.103792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.103818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.103948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.103976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.104112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.104140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 
00:34:21.036 [2024-07-25 20:04:30.104273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.104304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.104441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.104468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.104568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.104594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.104700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.104726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.104870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.104910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.105018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.105045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.105164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.105194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.105318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.105344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.105489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.105515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.105646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.105671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 
00:34:21.036 [2024-07-25 20:04:30.105825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.105851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.105978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.106825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.106989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.107014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.107132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.107158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 
00:34:21.036 [2024-07-25 20:04:30.107318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.036 [2024-07-25 20:04:30.107344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.036 qpair failed and we were unable to recover it. 00:34:21.036 [2024-07-25 20:04:30.107441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.107467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.107564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.107589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.108582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.108611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.108803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.108829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.108926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.108952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.109661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.109688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.109868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.109894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 
00:34:21.037 [2024-07-25 20:04:30.110298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.110892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.110990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.111134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.111270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.111430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.111585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 
00:34:21.037 [2024-07-25 20:04:30.111739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.111891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.111924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.112965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.112991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.113116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.113142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 
00:34:21.037 [2024-07-25 20:04:30.113243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.113269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.113385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.113427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.113560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.113586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.113741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.113767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.113870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.113895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.114007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.114032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.114155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.114181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.114291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.114317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.037 [2024-07-25 20:04:30.114476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.037 [2024-07-25 20:04:30.114501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.037 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.114605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.114631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 
00:34:21.038 [2024-07-25 20:04:30.115490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.115519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.115688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.115715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.115845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.115871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.115977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.116003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.116137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.116164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.116261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.116286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.116386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.116411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.116517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.116543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.116681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.116707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.119898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.119932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 
00:34:21.038 [2024-07-25 20:04:30.120100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.120129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.120242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.120270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.120418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.120444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.120606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.120632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.120759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.120784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.120911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.120937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.121077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.121213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.121338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.121504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 
00:34:21.038 [2024-07-25 20:04:30.121640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.121786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.121948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.121973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.122944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.122970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 
00:34:21.038 [2024-07-25 20:04:30.123069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.123192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.123343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.123513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.123647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.123812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.038 qpair failed and we were unable to recover it. 00:34:21.038 [2024-07-25 20:04:30.123973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.038 [2024-07-25 20:04:30.123999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.124102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.124129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.124237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.124264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.124373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.124398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 
00:34:21.039 [2024-07-25 20:04:30.124561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.124587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.124688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.124715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.124845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.124871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.125911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.125937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 
00:34:21.039 [2024-07-25 20:04:30.126096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.126237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.126368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.126503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.126629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.126781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.126934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.126959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.127108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.127257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.127381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 
00:34:21.039 [2024-07-25 20:04:30.127523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.127660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.127795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.127952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.127977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.128098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.128123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.128222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.039 [2024-07-25 20:04:30.128247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.039 qpair failed and we were unable to recover it. 00:34:21.039 [2024-07-25 20:04:30.128348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.128384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.128510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.128536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.128685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.128710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.128814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.128839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 
00:34:21.040 [2024-07-25 20:04:30.128964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.128989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.129182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.129223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.129358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.129385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.129549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.129575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.129701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.129727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.129875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.129902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.130011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.130184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.130309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.130465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 
00:34:21.040 [2024-07-25 20:04:30.130621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.130773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.130934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.130959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.131095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.131121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.131254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.131279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.131394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.131422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.131546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.131572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.131726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.131752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.131856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.131882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.132033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 
00:34:21.040 [2024-07-25 20:04:30.132177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.132303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.132437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.132561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.132724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.132849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.132875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.133002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.133181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.133340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.133504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 
00:34:21.040 [2024-07-25 20:04:30.133654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.133808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.133944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.133971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-25 20:04:30.134069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-25 20:04:30.134095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.134217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.134243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.134339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.134369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.134467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.134493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.134599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.134624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.134732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.134758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.134853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.134879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-25 20:04:30.134978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.135110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.135248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.135410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.135602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.135730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.135883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.135909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.136038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.136175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.136297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-25 20:04:30.136431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.136551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.136682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.136845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.136871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.137005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.137032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.137797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.137827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.137935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.137961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.138092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.138119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.138267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.138306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.138488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.138515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-25 20:04:30.138609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.138636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.138777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.138803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.138924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.138957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.139893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.139920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-25 20:04:30.140044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.140084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-25 20:04:30.140182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-25 20:04:30.140209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.140325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.140350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.140444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.140471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.140565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.140591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.140692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.140718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.140839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.140866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.140965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.140992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.141116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.141143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.141256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.141285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-25 20:04:30.141456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.141506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.141642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.141670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.141774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.141802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.141939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.141971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.142093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.142231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.142355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.142521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.142652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.142784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-25 20:04:30.142949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.142988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.143952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.143978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.144104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.144265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-25 20:04:30.144378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.144528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.144666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.144787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.144909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.144934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.145029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.145055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.145160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.145185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.145303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.145341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.145456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-25 20:04:30.145483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-25 20:04:30.145609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.145636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-25 20:04:30.145732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.145758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.145896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.145922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.146109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.146276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.146399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.146563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.146727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.146858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.146991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.147130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-25 20:04:30.147257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.147447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.147572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.147695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.147839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.147958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.147987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.148110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.148135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.148234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.148259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.148378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.148403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.148561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.148587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-25 20:04:30.148742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.148768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.148862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.148887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.148988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.149123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.149283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.149478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.149646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.149783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.149907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.149933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.150101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-25 20:04:30.150277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.150416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.150558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.150710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.150843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.150967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.150994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.151141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-25 20:04:30.151170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-25 20:04:30.151301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.151327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.151454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.151480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.151588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.151614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-07-25 20:04:30.151756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.151782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.151881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.151907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.152963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.152990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-07-25 20:04:30.153109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.153137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.153243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.153270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.153426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.153466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.153568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.153595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.153702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.153728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.153832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.153857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.153992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.154147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.154297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.154431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-07-25 20:04:30.154583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.154762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.154881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.154907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.155097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.155123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.155257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.155284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.155396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.155422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.155561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.155587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.155731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.155759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-07-25 20:04:30.155864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-07-25 20:04:30.155894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.156013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-07-25 20:04:30.156207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.156334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.156481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.156620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.156747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.156880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.156908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.157074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.157203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.157371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.157495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-07-25 20:04:30.157625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.157788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.157937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.157963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.158104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.158132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.158230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.158256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.159387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.159417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.159608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.159635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.159744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.159772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.159898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.159925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-07-25 20:04:30.160056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-07-25 20:04:30.160090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-07-25 20:04:30.160194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.045 [2024-07-25 20:04:30.160220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420
00:34:21.045 qpair failed and we were unable to recover it.
00:34:21.045 [2024-07-25 20:04:30.161164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.045 [2024-07-25 20:04:30.161204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:21.045 qpair failed and we were unable to recover it.
00:34:21.046 [2024-07-25 20:04:30.163081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.046 [2024-07-25 20:04:30.163126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420
00:34:21.046 qpair failed and we were unable to recover it.
00:34:21.046 [2024-07-25 20:04:30.168796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.046 [2024-07-25 20:04:30.168837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:21.046 qpair failed and we were unable to recover it.
00:34:21.051 [... the same pair of messages — "posix_sock_create: *ERROR*: connect() failed, errno = 111" followed by "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error" and "qpair failed and we were unable to recover it." — repeats continuously from 20:04:30.160 through 20:04:30.193, cycling through tqpairs 0x7fc964000b90, 0x99c840, 0x7fc96c000b90, and 0x7fc95c000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:21.051 [2024-07-25 20:04:30.193226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-25 20:04:30.193253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-25 20:04:30.193261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.051 [2024-07-25 20:04:30.193294] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.051 [2024-07-25 20:04:30.193309] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.051 [2024-07-25 20:04:30.193322] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.051 [2024-07-25 20:04:30.193332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.051 [2024-07-25 20:04:30.193349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-25 20:04:30.193374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-25 20:04:30.193475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-25 20:04:30.193501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-25 20:04:30.193517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:21.051 [2024-07-25 20:04:30.193547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:21.051 [2024-07-25 20:04:30.193627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.193653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.193594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:21.052 [2024-07-25 20:04:30.193598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:21.052 [2024-07-25 20:04:30.193752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.193777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.193887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.193915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.194038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
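The app.c NOTICE lines interleaved in the block above are the nvmf target describing how to grab its trace buffer while it is still running. Both commands below are taken directly from those notices; only the destination path for the copied shared-memory file is an arbitrary, illustrative choice.

# From the NOTICE lines above: capture a snapshot of trace events at runtime,
# or keep the raw shared-memory trace buffer for offline analysis/debug.
spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path is illustrative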
00:34:21.052 [2024-07-25 20:04:30.194197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.194337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.194473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.194600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.194726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.194859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.194885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
00:34:21.052 [2024-07-25 20:04:30.195554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.195847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.195978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.196115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.196265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.196401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.196555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.196673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.196806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
00:34:21.052 [2024-07-25 20:04:30.196935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.196961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.197099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.197128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.197256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.197284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.197402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.197449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.197553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.197582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.197712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.197740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.197867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.197894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.198033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.198203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.198329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
00:34:21.052 [2024-07-25 20:04:30.198494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.198621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.198751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-25 20:04:30.198885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-25 20:04:30.198918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 
00:34:21.053 [2024-07-25 20:04:30.199835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.199863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.199994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.200853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.200981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.201157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 
00:34:21.053 [2024-07-25 20:04:30.201321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.201459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.201614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.201739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.201895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.201923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 
00:34:21.053 [2024-07-25 20:04:30.202700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.202867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.202999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.203192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.203347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.203490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.203608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.203747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.203886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.203926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.204070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.204098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 
00:34:21.053 [2024-07-25 20:04:30.204193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.204219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.053 qpair failed and we were unable to recover it. 00:34:21.053 [2024-07-25 20:04:30.204309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.053 [2024-07-25 20:04:30.204335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.204435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.204461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.204572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.204598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.204703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.204731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.204829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.204857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.204988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.205139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.205259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.205389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 
00:34:21.054 [2024-07-25 20:04:30.205540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.205708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.205859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.205898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.206930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.206970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 
00:34:21.054 [2024-07-25 20:04:30.207083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.207220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.207339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.207497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.207627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.207761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.207878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.207905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.208004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.208152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.208321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 
00:34:21.054 [2024-07-25 20:04:30.208475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.208606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.208729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.054 qpair failed and we were unable to recover it. 00:34:21.054 [2024-07-25 20:04:30.208847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.054 [2024-07-25 20:04:30.208873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.208969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.208995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.209101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.209228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.209362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.209493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.209616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 
00:34:21.055 [2024-07-25 20:04:30.209764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.209882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.209909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 00:34:21.055 [2024-07-25 20:04:30.210915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.055 [2024-07-25 20:04:30.210941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.055 qpair failed and we were unable to recover it. 
00:34:21.055 [2024-07-25 20:04:30.211034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.055 [2024-07-25 20:04:30.211072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:21.055 qpair failed and we were unable to recover it.
00:34:21.055 [the same pair of errors (posix.c:1037 connect() failed, errno = 111, followed by the nvme_tcp.c:2374 sock connection error) repeats continuously from 2024-07-25 20:04:30.211 through 20:04:30.240 (console timestamps 00:34:21.055-00:34:21.061) for tqpairs 0x99c840, 0x7fc964000b90, 0x7fc96c000b90 and 0x7fc95c000b90, all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:34:21.061 [2024-07-25 20:04:30.240352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.240380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.240485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.240513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.240611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.240640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.240732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.240760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.240886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.240913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.241035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.241172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.241337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.241480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.241634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 
00:34:21.061 [2024-07-25 20:04:30.241770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.241915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.241955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.242913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.242943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.243050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.243090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 
00:34:21.061 [2024-07-25 20:04:30.243196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.243223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.243328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.243356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.243488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.243515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.243644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.243673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.243809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.243837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.243972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.244012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.244128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.244158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.244258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.244287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.244396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.244424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.061 [2024-07-25 20:04:30.244555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.244583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 
00:34:21.061 [2024-07-25 20:04:30.244681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.061 [2024-07-25 20:04:30.244710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.061 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.244842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.244870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.244973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.245961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.245989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 
00:34:21.062 [2024-07-25 20:04:30.246108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.246888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.246992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.247133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.247257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 
00:34:21.062 [2024-07-25 20:04:30.247425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.247574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.247725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.247851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.247878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.247992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.248137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.248271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.248420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.248550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.248690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 
00:34:21.062 [2024-07-25 20:04:30.248877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.248904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.249827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.062 [2024-07-25 20:04:30.249869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.062 qpair failed and we were unable to recover it. 00:34:21.062 [2024-07-25 20:04:30.250013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.250174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 
00:34:21.063 [2024-07-25 20:04:30.250296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.250450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.250571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.250706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.250870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.250912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 
00:34:21.063 [2024-07-25 20:04:30.251743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.251885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.251983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.252959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.252986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 
00:34:21.063 [2024-07-25 20:04:30.253079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.253864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.253994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.254022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.254136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.254164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.063 [2024-07-25 20:04:30.254272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.254312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 
00:34:21.063 [2024-07-25 20:04:30.254416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.063 [2024-07-25 20:04:30.254445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.063 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.254582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.254611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.254712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.254740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.254859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.254888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.255003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.255183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.255315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.255450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.255575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.255712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 
00:34:21.064 [2024-07-25 20:04:30.255852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.255880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.256880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.256908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 
00:34:21.064 [2024-07-25 20:04:30.257300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.257853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.257968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.258008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.258118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.258146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.258283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.258310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.258419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.258446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 00:34:21.064 [2024-07-25 20:04:30.258582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.064 [2024-07-25 20:04:30.258610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.064 qpair failed and we were unable to recover it. 
00:34:21.064 [2024-07-25 20:04:30.258741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.064 [2024-07-25 20:04:30.258771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420
00:34:21.064 qpair failed and we were unable to recover it.
00:34:21.064 [2024-07-25 20:04:30.258876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.064 [2024-07-25 20:04:30.258904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420
00:34:21.064 qpair failed and we were unable to recover it.
00:34:21.070 [... the same three-line failure pattern repeats continuously through 2024-07-25 20:04:30.287 for tqpairs 0x99c840, 0x7fc95c000b90, 0x7fc964000b90, and 0x7fc96c000b90: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair fails without recovery ...]
00:34:21.070 [2024-07-25 20:04:30.287731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.287758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.287860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.287888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.288959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.288986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 
00:34:21.070 [2024-07-25 20:04:30.289094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.289123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.289246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.289274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.289388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.289429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.289567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.289595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.289722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.289755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.289853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.289879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.289974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.290001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.290121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.290151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.290243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.290270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.070 qpair failed and we were unable to recover it. 00:34:21.070 [2024-07-25 20:04:30.290373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.070 [2024-07-25 20:04:30.290400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 
00:34:21.071 [2024-07-25 20:04:30.290503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.290531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.290658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.290685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.290810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.290839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.290948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.290989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.291130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.291160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.291291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.291319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.291453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.291481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.291577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.291604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.291711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.291740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.291829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.291857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 
00:34:21.071 [2024-07-25 20:04:30.291990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.292921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.292961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.293076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.293226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 
00:34:21.071 [2024-07-25 20:04:30.293370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.293536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.293692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.293848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.293970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.293998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.294104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.294226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.294351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.294482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.294605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 
00:34:21.071 [2024-07-25 20:04:30.294755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.294884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.294914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.295011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.295038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.295146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.295174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.295272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.295300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.295438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.295466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.295569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.071 [2024-07-25 20:04:30.295597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.071 qpair failed and we were unable to recover it. 00:34:21.071 [2024-07-25 20:04:30.295704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.295731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.295860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.295887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.296003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 
00:34:21.072 [2024-07-25 20:04:30.296157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.296314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.296438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.296562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.296691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.296868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.296908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.297038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.297203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.297349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.297501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 
00:34:21.072 [2024-07-25 20:04:30.297625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.297754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.297893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.297924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.298938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.298966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 
00:34:21.072 [2024-07-25 20:04:30.299099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.299127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.299270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.299302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.299426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.299454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.299582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.299610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.299723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.299753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.299886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.299913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.300041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.300077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.300178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.300206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.300309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.300336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.072 qpair failed and we were unable to recover it. 00:34:21.072 [2024-07-25 20:04:30.300486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.072 [2024-07-25 20:04:30.300513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 
00:34:21.073 [2024-07-25 20:04:30.300645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.300673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.300828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.300856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.301864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.301971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 
00:34:21.073 [2024-07-25 20:04:30.302103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.302252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.302377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.302525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.302648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.302764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.302951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.302979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.303089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.303131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.303266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.303295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.303425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.303452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 
00:34:21.073 [2024-07-25 20:04:30.303581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.303609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.303734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.303761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.303865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.303894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.303994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.304149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.304330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.304461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.304586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.304723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.304869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.304897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 
00:34:21.073 [2024-07-25 20:04:30.304994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.073 [2024-07-25 20:04:30.305829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.073 qpair failed and we were unable to recover it. 00:34:21.073 [2024-07-25 20:04:30.305930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.305957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.306047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.306176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 
00:34:21.074 [2024-07-25 20:04:30.306302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c840 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 A controller has encountered a failure and is being reset. 00:34:21.074 [2024-07-25 20:04:30.306447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc95c000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.306578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc96c000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.306741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.306916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.306946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.307107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.307160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.307272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.307301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.307396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.307424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 00:34:21.074 [2024-07-25 20:04:30.307522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.307550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc964000b90 with addr=10.0.0.2, port=4420 00:34:21.074 qpair failed and we were unable to recover it. 
00:34:21.074 [2024-07-25 20:04:30.307698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.074 [2024-07-25 20:04:30.307747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9aa390 with addr=10.0.0.2, port=4420 00:34:21.074 [2024-07-25 20:04:30.307768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aa390 is same with the state(5) to be set 00:34:21.074 [2024-07-25 20:04:30.307797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9aa390 (9): Bad file descriptor 00:34:21.074 [2024-07-25 20:04:30.307818] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.074 [2024-07-25 20:04:30.307833] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.074 [2024-07-25 20:04:30.307849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.074 Unable to reset the controller. 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 Malloc0 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 [2024-07-25 20:04:30.384953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.074 20:04:30 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 [2024-07-25 20:04:30.413246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.074 20:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4134227 00:34:22.456 Controller properly reset. 00:34:27.721 Initializing NVMe Controllers 00:34:27.721 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:27.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:27.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:27.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:27.721 Initialization complete. Launching workers. 
00:34:27.721 Starting thread on core 1 00:34:27.721 Starting thread on core 2 00:34:27.721 Starting thread on core 3 00:34:27.721 Starting thread on core 0 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:27.721 00:34:27.721 real 0m10.622s 00:34:27.721 user 0m32.432s 00:34:27.721 sys 0m7.999s 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:27.721 ************************************ 00:34:27.721 END TEST nvmf_target_disconnect_tc2 00:34:27.721 ************************************ 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:27.721 rmmod nvme_tcp 00:34:27.721 rmmod nvme_fabrics 00:34:27.721 rmmod nvme_keyring 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 4134628 ']' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 4134628 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 4134628 ']' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 4134628 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4134628 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4134628' 00:34:27.721 killing process with pid 4134628 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 4134628 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 4134628 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:27.721 
20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:27.721 20:04:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.622 20:04:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:29.622 00:34:29.622 real 0m15.299s 00:34:29.622 user 0m57.537s 00:34:29.622 sys 0m10.352s 00:34:29.622 20:04:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:29.622 20:04:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:29.622 ************************************ 00:34:29.622 END TEST nvmf_target_disconnect 00:34:29.622 ************************************ 00:34:29.622 20:04:38 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:29.622 20:04:38 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.622 20:04:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.622 20:04:38 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:29.622 00:34:29.622 real 26m59.257s 00:34:29.622 user 74m20.377s 00:34:29.622 sys 6m25.480s 00:34:29.622 20:04:38 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:29.622 20:04:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.622 ************************************ 00:34:29.622 END TEST nvmf_tcp 00:34:29.622 ************************************ 00:34:29.622 20:04:38 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:29.622 20:04:38 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.622 20:04:38 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:29.622 20:04:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:29.622 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:34:29.622 ************************************ 00:34:29.622 START TEST spdkcli_nvmf_tcp 00:34:29.622 ************************************ 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.622 * Looking for test storage... 
00:34:29.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:29.622 20:04:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4135821 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4135821 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 4135821 ']' 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:29.623 20:04:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.623 [2024-07-25 20:04:38.755582] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:34:29.623 [2024-07-25 20:04:38.755664] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135821 ] 00:34:29.623 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.623 [2024-07-25 20:04:38.813132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:29.623 [2024-07-25 20:04:38.898676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.623 [2024-07-25 20:04:38.898679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.623 20:04:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:29.623 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:29.623 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:29.623 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:29.623 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:29.623 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:29.623 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:29.623 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.623 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:29.623 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.623 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:29.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:29.623 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:29.623 ' 00:34:32.158 [2024-07-25 20:04:41.575381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.539 [2024-07-25 20:04:42.791674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:36.071 [2024-07-25 20:04:45.070597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:37.977 [2024-07-25 20:04:47.041088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:39.349 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:39.349 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:39.349 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:39.349 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:39.349 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:39.349 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:39.349 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:39.349 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:39.349 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.349 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.349 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:39.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:39.349 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:39.349 20:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.918 20:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:39.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:39.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:39.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:39.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:39.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:39.918 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:39.918 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:39.918 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:39.918 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:39.918 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:39.918 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:39.918 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:39.918 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:39.918 ' 00:34:45.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:45.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:45.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:45.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:45.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:45.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:45.188 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:45.188 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:45.188 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:45.188 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:45.188 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:45.188 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:45.188 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:45.188 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 4135821 ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4135821' 00:34:45.188 killing process with pid 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4135821 ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4135821 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 4135821 ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 4135821 00:34:45.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4135821) - No such process 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 4135821 is not found' 00:34:45.188 Process with pid 4135821 is not found 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:45.188 00:34:45.188 real 0m15.922s 00:34:45.188 user 0m33.678s 00:34:45.188 sys 0m0.775s 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:45.188 20:04:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.188 ************************************ 00:34:45.188 END TEST spdkcli_nvmf_tcp 00:34:45.188 ************************************ 00:34:45.188 20:04:54 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:45.188 20:04:54 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:45.188 20:04:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:45.188 20:04:54 -- common/autotest_common.sh@10 -- # set +x 00:34:45.452 ************************************ 00:34:45.452 START TEST nvmf_identify_passthru 00:34:45.452 ************************************ 00:34:45.452 20:04:54 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:45.452 * Looking for test storage... 00:34:45.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.452 20:04:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.452 20:04:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.452 20:04:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.452 20:04:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.452 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.452 20:04:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.452 20:04:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.452 20:04:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.452 20:04:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.452 20:04:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.453 20:04:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.453 20:04:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:45.453 20:04:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.453 20:04:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.453 20:04:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:45.453 20:04:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:45.453 20:04:54 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:45.453 20:04:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:47.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:47.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:47.413 20:04:56 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:47.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:47.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:47.413 20:04:56 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:47.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:34:47.413 00:34:47.413 --- 10.0.0.2 ping statistics --- 00:34:47.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.413 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:34:47.413 00:34:47.413 --- 10.0.0.1 ping statistics --- 00:34:47.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.413 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:47.413 20:04:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:47.414 20:04:56 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:47.414 20:04:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:47.414 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.608 
20:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:51.608 20:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:51.608 20:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:51.608 20:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:51.608 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4140316 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:55.798 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4140316 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 4140316 ']' 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:55.798 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.798 [2024-07-25 20:05:05.161326] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:34:55.798 [2024-07-25 20:05:05.161448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.798 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.798 [2024-07-25 20:05:05.227160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.057 [2024-07-25 20:05:05.316744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.057 [2024-07-25 20:05:05.316801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
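Before the target comes up, identify_passthru.sh records the physical drive's serial and model number by scraping spdk_nvme_identify, then launches nvmf_tgt inside the namespace with --wait-for-rpc. Roughly, assuming the repository root as the working directory and the BDF reported above:

  # First NVMe BDF, the same way get_first_nvme_bdf derives it.
  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # 0000:88:00.0 here
  nvme_serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}')
  nvme_model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Model Number:' | awk '{print $3}')
  # --wait-for-rpc holds the app before subsystem init so nvmf_set_config can be
  # issued first; -m 0xF matches the four reactor cores seen in the log.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!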
00:34:56.057 [2024-07-25 20:05:05.316815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.057 [2024-07-25 20:05:05.316826] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.057 [2024-07-25 20:05:05.316836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.057 [2024-07-25 20:05:05.316901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.057 [2024-07-25 20:05:05.316958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.057 [2024-07-25 20:05:05.317026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.057 [2024-07-25 20:05:05.317028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:56.057 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.057 INFO: Log level set to 20 00:34:56.057 INFO: Requests: 00:34:56.057 { 00:34:56.057 "jsonrpc": "2.0", 00:34:56.057 "method": "nvmf_set_config", 00:34:56.057 "id": 1, 00:34:56.057 "params": { 00:34:56.057 "admin_cmd_passthru": { 00:34:56.057 "identify_ctrlr": true 00:34:56.057 } 00:34:56.057 } 00:34:56.057 } 00:34:56.057 00:34:56.057 INFO: response: 00:34:56.057 { 00:34:56.057 "jsonrpc": "2.0", 00:34:56.057 "id": 1, 00:34:56.057 "result": true 00:34:56.057 } 00:34:56.057 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.057 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.057 INFO: Setting log level to 20 00:34:56.057 INFO: Setting log level to 20 00:34:56.057 INFO: Log level set to 20 00:34:56.057 INFO: Log level set to 20 00:34:56.057 INFO: Requests: 00:34:56.057 { 00:34:56.057 "jsonrpc": "2.0", 00:34:56.057 "method": "framework_start_init", 00:34:56.057 "id": 1 00:34:56.057 } 00:34:56.057 00:34:56.057 INFO: Requests: 00:34:56.057 { 00:34:56.057 "jsonrpc": "2.0", 00:34:56.057 "method": "framework_start_init", 00:34:56.057 "id": 1 00:34:56.057 } 00:34:56.057 00:34:56.057 [2024-07-25 20:05:05.470291] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:56.057 INFO: response: 00:34:56.057 { 00:34:56.057 "jsonrpc": "2.0", 00:34:56.057 "id": 1, 00:34:56.057 "result": true 00:34:56.057 } 00:34:56.057 00:34:56.057 INFO: response: 00:34:56.057 { 00:34:56.057 "jsonrpc": "2.0", 00:34:56.057 "id": 1, 00:34:56.057 "result": true 00:34:56.057 } 00:34:56.057 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.057 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.057 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.057 20:05:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:56.057 INFO: Setting log level to 40 00:34:56.057 INFO: Setting log level to 40 00:34:56.057 INFO: Setting log level to 40 00:34:56.057 [2024-07-25 20:05:05.480236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.315 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.315 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:56.315 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.315 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.315 20:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:56.315 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.315 20:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.600 Nvme0n1 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.600 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.600 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.600 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.600 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.600 [2024-07-25 20:05:08.366964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.601 [ 00:34:59.601 { 00:34:59.601 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:59.601 "subtype": "Discovery", 00:34:59.601 "listen_addresses": [], 00:34:59.601 "allow_any_host": true, 00:34:59.601 "hosts": [] 00:34:59.601 }, 00:34:59.601 { 00:34:59.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.601 "subtype": "NVMe", 00:34:59.601 "listen_addresses": [ 00:34:59.601 { 00:34:59.601 "trtype": "TCP", 00:34:59.601 "adrfam": "IPv4", 00:34:59.601 "traddr": "10.0.0.2", 00:34:59.601 "trsvcid": "4420" 00:34:59.601 } 00:34:59.601 ], 00:34:59.601 "allow_any_host": true, 00:34:59.601 "hosts": [], 00:34:59.601 "serial_number": 
"SPDK00000000000001", 00:34:59.601 "model_number": "SPDK bdev Controller", 00:34:59.601 "max_namespaces": 1, 00:34:59.601 "min_cntlid": 1, 00:34:59.601 "max_cntlid": 65519, 00:34:59.601 "namespaces": [ 00:34:59.601 { 00:34:59.601 "nsid": 1, 00:34:59.601 "bdev_name": "Nvme0n1", 00:34:59.601 "name": "Nvme0n1", 00:34:59.601 "nguid": "7F2BC9431B7A4CEA8757F779C3824EF9", 00:34:59.601 "uuid": "7f2bc943-1b7a-4cea-8757-f779c3824ef9" 00:34:59.601 } 00:34:59.601 ] 00:34:59.601 } 00:34:59.601 ] 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:59.601 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:59.601 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:59.601 20:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:59.601 rmmod nvme_tcp 00:34:59.601 rmmod nvme_fabrics 00:34:59.601 rmmod nvme_keyring 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:59.601 20:05:08 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 4140316 ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 4140316 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 4140316 ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 4140316 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4140316 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4140316' 00:34:59.601 killing process with pid 4140316 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 4140316 00:34:59.601 20:05:08 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 4140316 00:35:01.503 20:05:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:01.503 20:05:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:01.503 20:05:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:01.503 20:05:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.503 20:05:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:01.503 20:05:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.503 20:05:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.503 20:05:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.407 20:05:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:03.407 00:35:03.407 real 0m17.890s 00:35:03.407 user 0m26.936s 00:35:03.407 sys 0m2.212s 00:35:03.407 20:05:12 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:03.407 20:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.407 ************************************ 00:35:03.407 END TEST nvmf_identify_passthru 00:35:03.407 ************************************ 00:35:03.407 20:05:12 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:03.407 20:05:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:03.407 20:05:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:03.407 20:05:12 -- common/autotest_common.sh@10 -- # set +x 00:35:03.407 ************************************ 00:35:03.407 START TEST nvmf_dif 00:35:03.407 ************************************ 00:35:03.407 20:05:12 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:03.407 * Looking for test storage... 
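In summary, the passthru test that just completed drives the target entirely over JSON-RPC: identify passthrough is enabled before framework init, the PCIe controller is attached and exported as a single-namespace TCP subsystem, and the serial/model read back over the fabric must equal the PCIe-side values. A sketch using scripts/rpc.py with the same arguments the log shows (rpc_cmd is a thin wrapper around it, defaulting to /var/tmp/spdk.sock):

  rpc=scripts/rpc.py
  $rpc nvmf_set_config --passthru-identify-ctrlr        # forward Identify admin cmds to the real ctrlr
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Pass criterion: fabric-side identify reports the physical drive's identity.
  fab_serial=$(build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | grep 'Serial Number:' | awk '{print $3}')
  [ "$fab_serial" = "$nvme_serial" ] || echo "passthru identify mismatch" >&2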
00:35:03.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.407 20:05:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.407 20:05:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.407 20:05:12 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.407 20:05:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.407 20:05:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.407 20:05:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.407 20:05:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.407 20:05:12 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:03.407 20:05:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:03.407 20:05:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:03.407 20:05:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:03.407 20:05:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:03.407 20:05:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:03.407 20:05:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.407 20:05:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:03.407 20:05:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:03.407 20:05:12 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:03.407 20:05:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:05.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:05.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:05.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:05.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.310 20:05:14 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.310 20:05:14 
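The discovery loop above is a sysfs walk: every PCI ID in the e810/x722/mlx allow-lists is resolved to its kernel net device through /sys/bus/pci/devices/<bdf>/net, and only ports whose link is up are kept. A minimal standalone equivalent for the E810 ports matched here (8086:159b); the real filtering in common.sh is more involved:

  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue
          dev=$(basename "$path")
          state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
          echo "Found net devices under $pci: $dev ($state)"
      done
  done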
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:05.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:35:05.311 00:35:05.311 --- 10.0.0.2 ping statistics --- 00:35:05.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.311 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:35:05.311 20:05:14 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:05.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:35:05.311 00:35:05.311 --- 10.0.0.1 ping statistics --- 00:35:05.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.311 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:35:05.311 20:05:14 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.311 20:05:14 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:05.311 20:05:14 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:05.311 20:05:14 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:06.247 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:06.247 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:06.247 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:06.247 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:06.247 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:06.247 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:06.247 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:06.247 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:06.247 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:06.247 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:06.247 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:06.247 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:06.247 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:06.247 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:06.247 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:06.247 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:06.247 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:06.506 20:05:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:06.506 20:05:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=4143555 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:06.506 20:05:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 4143555 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 4143555 ']' 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:06.506 20:05:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.506 [2024-07-25 20:05:15.832046] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:35:06.506 [2024-07-25 20:05:15.832143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.506 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.506 [2024-07-25 20:05:15.901272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.765 [2024-07-25 20:05:15.990774] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.765 [2024-07-25 20:05:15.990839] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.765 [2024-07-25 20:05:15.990857] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.765 [2024-07-25 20:05:15.990870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.765 [2024-07-25 20:05:15.990882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
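The "Waiting for process to start up and listen on UNIX domain socket" message comes from waitforlisten, which is essentially a bounded poll of the RPC socket. A simplified stand-in (the real helper in autotest_common.sh also handles custom socket paths and gives up after max_retries):

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                      # app died during startup
          scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
              >/dev/null 2>&1 && return 0                             # socket is up and answering
          sleep 0.5
      done
      return 1
  }
  waitforlisten_sketch "$nvmfpid"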
00:35:06.765 [2024-07-25 20:05:15.990919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:06.765 20:05:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.765 20:05:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:06.765 20:05:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:06.765 20:05:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.765 [2024-07-25 20:05:16.142870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.765 20:05:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.765 20:05:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.765 ************************************ 00:35:06.765 START TEST fio_dif_1_default 00:35:06.765 ************************************ 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:06.765 bdev_null0 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.765 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:07.027 [2024-07-25 20:05:16.199177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:07.027 { 00:35:07.027 "params": { 00:35:07.027 "name": "Nvme$subsystem", 00:35:07.027 "trtype": "$TEST_TRANSPORT", 00:35:07.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.027 "adrfam": "ipv4", 00:35:07.027 "trsvcid": "$NVMF_PORT", 00:35:07.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.027 "hdgst": ${hdgst:-false}, 00:35:07.027 "ddgst": ${ddgst:-false} 00:35:07.027 }, 00:35:07.027 "method": "bdev_nvme_attach_controller" 00:35:07.027 } 00:35:07.027 EOF 00:35:07.027 )") 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:07.027 "params": { 00:35:07.027 "name": "Nvme0", 00:35:07.027 "trtype": "tcp", 00:35:07.027 "traddr": "10.0.0.2", 00:35:07.027 "adrfam": "ipv4", 00:35:07.027 "trsvcid": "4420", 00:35:07.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.027 "hdgst": false, 00:35:07.027 "ddgst": false 00:35:07.027 }, 00:35:07.027 "method": "bdev_nvme_attach_controller" 00:35:07.027 }' 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:07.027 20:05:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.286 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:07.286 fio-3.35 00:35:07.286 Starting 1 thread 00:35:07.286 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.525 00:35:19.525 filename0: (groupid=0, jobs=1): err= 0: pid=4143807: Thu Jul 25 20:05:27 2024 00:35:19.525 read: IOPS=95, BW=382KiB/s (392kB/s)(3840KiB/10041msec) 00:35:19.525 slat (nsec): min=4953, max=87520, avg=9207.82, stdev=3898.36 00:35:19.525 clat (usec): min=40914, max=47286, avg=41805.63, stdev=528.65 00:35:19.525 lat (usec): min=40922, max=47317, avg=41814.84, stdev=528.80 00:35:19.525 clat percentiles (usec): 00:35:19.525 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:35:19.525 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:19.525 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:19.525 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:35:19.525 | 99.99th=[47449] 00:35:19.525 bw ( KiB/s): min= 352, max= 384, per=99.89%, avg=382.40, stdev= 7.16, samples=20 00:35:19.525 iops : min= 88, max= 96, 
avg=95.60, stdev= 1.79, samples=20 00:35:19.525 lat (msec) : 50=100.00% 00:35:19.525 cpu : usr=89.58%, sys=9.93%, ctx=20, majf=0, minf=289 00:35:19.525 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.525 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.526 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:19.526 00:35:19.526 Run status group 0 (all jobs): 00:35:19.526 READ: bw=382KiB/s (392kB/s), 382KiB/s-382KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10041-10041msec 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 00:35:19.526 real 0m11.195s 00:35:19.526 user 0m10.222s 00:35:19.526 sys 0m1.305s 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 ************************************ 00:35:19.526 END TEST fio_dif_1_default 00:35:19.526 ************************************ 00:35:19.526 20:05:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:19.526 20:05:27 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:19.526 20:05:27 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 ************************************ 00:35:19.526 START TEST fio_dif_1_multi_subsystems 00:35:19.526 ************************************ 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
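fio_dif_1_default, condensed: a null bdev carrying 16 bytes of per-block metadata with DIF type 1 sits behind a TCP transport created with --dif-insert-or-strip, and fio exercises it through SPDK's bdev ioengine. A sketch using the same RPC arguments the log shows; SPDK_DIR, bdev.json and dif.job are placeholders for what the harness actually pipes in via /dev/fd:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 64 MB, 512B blocks + 16B metadata
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: the fio bdev plugin attaches to the subsystem over TCP using a
  # JSON config equivalent to the bdev_nvme_attach_controller block printed above.
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
      fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.job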
create_subsystem 0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 bdev_null0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 [2024-07-25 20:05:27.449856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 bdev_null1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:19.526 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:19.527 { 00:35:19.527 "params": { 00:35:19.527 "name": "Nvme$subsystem", 00:35:19.527 "trtype": "$TEST_TRANSPORT", 00:35:19.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.527 "adrfam": "ipv4", 00:35:19.527 "trsvcid": "$NVMF_PORT", 00:35:19.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.527 "hdgst": ${hdgst:-false}, 00:35:19.527 "ddgst": ${ddgst:-false} 00:35:19.527 }, 00:35:19.527 "method": "bdev_nvme_attach_controller" 00:35:19.527 } 00:35:19.527 EOF 00:35:19.527 )") 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1337 -- # shift 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:19.527 { 00:35:19.527 "params": { 00:35:19.527 "name": "Nvme$subsystem", 00:35:19.527 "trtype": "$TEST_TRANSPORT", 00:35:19.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.527 "adrfam": "ipv4", 00:35:19.527 "trsvcid": "$NVMF_PORT", 00:35:19.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.527 "hdgst": ${hdgst:-false}, 00:35:19.527 "ddgst": ${ddgst:-false} 00:35:19.527 }, 00:35:19.527 "method": "bdev_nvme_attach_controller" 00:35:19.527 } 00:35:19.527 EOF 00:35:19.527 )") 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
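Behind the rpc_cmd wrapper traced above, each create_subsystem call boils down to four SPDK RPCs: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1; create an NVMe-oF subsystem; attach the bdev as a namespace; and expose it on the TCP listener at 10.0.0.2:4420. A minimal standalone sketch of that sequence, assuming scripts/rpc.py is pointed at the already-running nvmf target (the TCP transport itself is created earlier in the run and is not shown in this excerpt):

# sketch only: replicate "create_subsystem 0" by hand; rpc.py path and default socket assumed
RPC=./scripts/rpc.py

$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The second subsystem (bdev_null1 behind nqn.2016-06.io.spdk:cnode1) is created the same way, which is what gives fio two independent filenames to drive in parallel below.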
00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:19.527 "params": { 00:35:19.527 "name": "Nvme0", 00:35:19.527 "trtype": "tcp", 00:35:19.527 "traddr": "10.0.0.2", 00:35:19.527 "adrfam": "ipv4", 00:35:19.527 "trsvcid": "4420", 00:35:19.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.527 "hdgst": false, 00:35:19.527 "ddgst": false 00:35:19.527 }, 00:35:19.527 "method": "bdev_nvme_attach_controller" 00:35:19.527 },{ 00:35:19.527 "params": { 00:35:19.527 "name": "Nvme1", 00:35:19.527 "trtype": "tcp", 00:35:19.527 "traddr": "10.0.0.2", 00:35:19.527 "adrfam": "ipv4", 00:35:19.527 "trsvcid": "4420", 00:35:19.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.527 "hdgst": false, 00:35:19.527 "ddgst": false 00:35:19.527 }, 00:35:19.527 "method": "bdev_nvme_attach_controller" 00:35:19.527 }' 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:19.527 20:05:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.527 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:19.527 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:19.527 fio-3.35 00:35:19.527 Starting 2 threads 00:35:19.527 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.493 00:35:29.493 filename0: (groupid=0, jobs=1): err= 0: pid=4145148: Thu Jul 25 20:05:38 2024 00:35:29.493 read: IOPS=143, BW=575KiB/s (589kB/s)(5760KiB/10011msec) 00:35:29.493 slat (nsec): min=5004, max=48665, avg=12033.24, stdev=5687.21 00:35:29.493 clat (usec): min=635, max=49289, avg=27769.99, stdev=19026.97 00:35:29.493 lat (usec): min=643, max=49305, avg=27782.02, stdev=19027.41 00:35:29.494 clat percentiles (usec): 00:35:29.494 | 1.00th=[ 660], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 701], 00:35:29.494 | 30.00th=[ 725], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:29.494 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:29.494 | 99.00th=[41681], 99.50th=[42206], 99.90th=[49021], 99.95th=[49546], 00:35:29.494 | 99.99th=[49546] 
00:35:29.494 bw ( KiB/s): min= 384, max= 768, per=43.17%, avg=574.40, stdev=184.99, samples=20 00:35:29.494 iops : min= 96, max= 192, avg=143.60, stdev=46.25, samples=20 00:35:29.494 lat (usec) : 750=30.97%, 1000=1.11% 00:35:29.494 lat (msec) : 2=0.97%, 50=66.94% 00:35:29.494 cpu : usr=96.27%, sys=3.34%, ctx=56, majf=0, minf=105 00:35:29.494 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.494 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.494 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:29.494 filename1: (groupid=0, jobs=1): err= 0: pid=4145149: Thu Jul 25 20:05:38 2024 00:35:29.494 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10001msec) 00:35:29.494 slat (nsec): min=7186, max=99640, avg=10456.83, stdev=5410.61 00:35:29.494 clat (usec): min=634, max=47508, avg=21154.14, stdev=20318.89 00:35:29.494 lat (usec): min=641, max=47556, avg=21164.60, stdev=20318.21 00:35:29.494 clat percentiles (usec): 00:35:29.494 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 717], 00:35:29.494 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:35:29.494 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:29.494 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:35:29.494 | 99.99th=[47449] 00:35:29.494 bw ( KiB/s): min= 672, max= 768, per=56.85%, avg=756.21, stdev=28.64, samples=19 00:35:29.494 iops : min= 168, max= 192, avg=189.05, stdev= 7.16, samples=19 00:35:29.494 lat (usec) : 750=25.48%, 1000=23.89% 00:35:29.494 lat (msec) : 2=0.42%, 50=50.21% 00:35:29.494 cpu : usr=96.70%, sys=3.01%, ctx=16, majf=0, minf=197 00:35:29.494 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.494 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.494 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:29.494 00:35:29.494 Run status group 0 (all jobs): 00:35:29.494 READ: bw=1330KiB/s (1362kB/s), 575KiB/s-755KiB/s (589kB/s-773kB/s), io=13.0MiB (13.6MB), run=10001-10011msec 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:29.494 20:05:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 00:35:29.494 real 0m11.390s 00:35:29.494 user 0m20.675s 00:35:29.494 sys 0m0.911s 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 ************************************ 00:35:29.494 END TEST fio_dif_1_multi_subsystems 00:35:29.494 ************************************ 00:35:29.494 20:05:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:29.494 20:05:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:29.494 20:05:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 ************************************ 00:35:29.494 START TEST fio_dif_rand_params 00:35:29.494 ************************************ 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 bdev_null0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.494 [2024-07-25 20:05:38.878669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:29.494 { 00:35:29.494 "params": { 00:35:29.494 "name": "Nvme$subsystem", 00:35:29.494 "trtype": "$TEST_TRANSPORT", 00:35:29.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.494 "adrfam": "ipv4", 00:35:29.494 "trsvcid": "$NVMF_PORT", 00:35:29.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.494 "hdgst": ${hdgst:-false}, 00:35:29.494 "ddgst": ${ddgst:-false} 00:35:29.494 }, 00:35:29.494 "method": "bdev_nvme_attach_controller" 00:35:29.494 } 00:35:29.494 EOF 00:35:29.494 )") 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:29.494 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
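The config fragments accumulated above are what the spdk_bdev fio engine consumes through --spdk_json_conf. The trace only shows the per-controller bdev_nvme_attach_controller entries and the final jq pass; the outer wrapper is not visible in this excerpt, so the shape below is a sketch based on the generic SPDK JSON-config layout (a "bdev" subsystem with a config array), written to a hypothetical /tmp/bdev.json instead of the /dev/fd descriptor the script actually passes:

# sketch: roughly what the single-subsystem JSON config for this phase looks like (wrapper assumed)
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

With that attach in place the controller's namespace typically surfaces to fio as bdev Nvme0n1, which is the filename the generated job file points at.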
00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:29.495 "params": { 00:35:29.495 "name": "Nvme0", 00:35:29.495 "trtype": "tcp", 00:35:29.495 "traddr": "10.0.0.2", 00:35:29.495 "adrfam": "ipv4", 00:35:29.495 "trsvcid": "4420", 00:35:29.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.495 "hdgst": false, 00:35:29.495 "ddgst": false 00:35:29.495 }, 00:35:29.495 "method": "bdev_nvme_attach_controller" 00:35:29.495 }' 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:29.495 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:29.754 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:29.754 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:29.754 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.754 20:05:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.754 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:29.754 ... 
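On the fio side, the trace above shows the engine being preloaded from spdk/build/fio/spdk_bdev and both the job file and the JSON config arriving over /dev/fd descriptors. A rough standalone equivalent using ordinary files, with a job section matching the parameters this phase sets (bs=128k, numjobs=3, iodepth=3, runtime=5, randread). The exact keys gen_fio_conf writes are not shown in the excerpt, so the job file below, including the Nvme0n1 filename, is an approximation rather than a verbatim copy:

# sketch: run the same randread job against the attached controller
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif_rand.fio

thread=1 is required by the SPDK fio plugins; the remaining options mirror what the "filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB ... iodepth=3" banner above reports.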
00:35:29.754 fio-3.35 00:35:29.754 Starting 3 threads 00:35:29.754 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.311 00:35:36.311 filename0: (groupid=0, jobs=1): err= 0: pid=4146487: Thu Jul 25 20:05:44 2024 00:35:36.311 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5045msec) 00:35:36.311 slat (usec): min=4, max=126, avg=18.16, stdev= 6.35 00:35:36.311 clat (usec): min=4823, max=58901, avg=13913.54, stdev=8243.55 00:35:36.311 lat (usec): min=4836, max=58915, avg=13931.70, stdev=8243.87 00:35:36.311 clat percentiles (usec): 00:35:36.311 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 7635], 20.00th=[ 9241], 00:35:36.311 | 30.00th=[11076], 40.00th=[12125], 50.00th=[12780], 60.00th=[13698], 00:35:36.311 | 70.00th=[14877], 80.00th=[15795], 90.00th=[16909], 95.00th=[18482], 00:35:36.311 | 99.00th=[52691], 99.50th=[53740], 99.90th=[57410], 99.95th=[58983], 00:35:36.311 | 99.99th=[58983] 00:35:36.311 bw ( KiB/s): min=18944, max=33280, per=33.85%, avg=27653.50, stdev=3895.58, samples=10 00:35:36.311 iops : min= 148, max= 260, avg=216.00, stdev=30.43, samples=10 00:35:36.311 lat (msec) : 10=24.01%, 20=71.93%, 50=1.75%, 100=2.31% 00:35:36.311 cpu : usr=93.99%, sys=5.10%, ctx=20, majf=0, minf=182 00:35:36.311 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.311 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.311 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.311 filename0: (groupid=0, jobs=1): err= 0: pid=4146488: Thu Jul 25 20:05:44 2024 00:35:36.311 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5005msec) 00:35:36.311 slat (nsec): min=4880, max=60266, avg=19707.50, stdev=6771.42 00:35:36.311 clat (usec): min=4895, max=57990, avg=14150.76, stdev=7733.12 00:35:36.311 lat (usec): min=4913, max=58015, avg=14170.47, stdev=7733.24 00:35:36.311 clat percentiles (usec): 00:35:36.311 | 1.00th=[ 5866], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9896], 00:35:36.311 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13173], 60.00th=[13960], 00:35:36.311 | 70.00th=[14746], 80.00th=[15664], 90.00th=[17171], 95.00th=[18220], 00:35:36.311 | 99.00th=[51643], 99.50th=[54264], 99.90th=[56886], 99.95th=[57934], 00:35:36.311 | 99.99th=[57934] 00:35:36.311 bw ( KiB/s): min=23552, max=30464, per=33.10%, avg=27038.40, stdev=2661.05, samples=10 00:35:36.311 iops : min= 184, max= 238, avg=211.20, stdev=20.83, samples=10 00:35:36.311 lat (msec) : 10=20.59%, 20=75.73%, 50=1.51%, 100=2.17% 00:35:36.311 cpu : usr=89.31%, sys=7.17%, ctx=583, majf=0, minf=95 00:35:36.311 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.311 issued rwts: total=1059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.311 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.311 filename0: (groupid=0, jobs=1): err= 0: pid=4146489: Thu Jul 25 20:05:44 2024 00:35:36.311 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(135MiB/5006msec) 00:35:36.311 slat (nsec): min=4529, max=45264, avg=17508.46, stdev=4984.05 00:35:36.311 clat (usec): min=4501, max=54807, avg=13904.59, stdev=9450.28 00:35:36.311 lat (usec): min=4514, max=54822, avg=13922.10, stdev=9449.94 00:35:36.311 clat percentiles (usec): 00:35:36.311 | 
1.00th=[ 5669], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[10290], 00:35:36.311 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:35:36.311 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14746], 95.00th=[49021], 00:35:36.311 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:35:36.311 | 99.99th=[54789] 00:35:36.311 bw ( KiB/s): min=22272, max=33280, per=33.72%, avg=27545.60, stdev=3483.44, samples=10 00:35:36.311 iops : min= 174, max= 260, avg=215.20, stdev=27.21, samples=10 00:35:36.311 lat (msec) : 10=18.27%, 20=75.79%, 50=2.23%, 100=3.71% 00:35:36.311 cpu : usr=94.77%, sys=4.66%, ctx=55, majf=0, minf=64 00:35:36.311 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.311 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.311 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.311 00:35:36.311 Run status group 0 (all jobs): 00:35:36.311 READ: bw=79.8MiB/s (83.7MB/s), 26.4MiB/s-26.9MiB/s (27.7MB/s-28.2MB/s), io=403MiB (422MB), run=5005-5045msec 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:36.311 20:05:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.311 bdev_null0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.311 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 [2024-07-25 20:05:44.909168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 bdev_null1 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
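For this phase the script runs the same create_subsystem helper for IDs 0 through 2, now backed by --dif-type 2 null bdevs, and after fio completes it unwinds them through destroy_subsystems just as it did after the earlier runs. A compact sketch of that create/teardown cycle, under the same rpc.py assumptions as the earlier sketch:

RPC=./scripts/rpc.py

# bring up three DIF type 2 subsystems (cnode0..cnode2)
for i in 0 1 2; do
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# ... run fio against the three attached controllers ...

# teardown, mirroring destroy_subsystems
for i in 0 1 2; do
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $RPC bdev_null_delete "bdev_null$i"
done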
00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 bdev_null2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.312 { 00:35:36.312 "params": { 00:35:36.312 "name": "Nvme$subsystem", 00:35:36.312 "trtype": "$TEST_TRANSPORT", 00:35:36.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.312 "adrfam": "ipv4", 00:35:36.312 "trsvcid": "$NVMF_PORT", 00:35:36.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.312 "hdgst": ${hdgst:-false}, 00:35:36.312 "ddgst": ${ddgst:-false} 00:35:36.312 }, 00:35:36.312 "method": "bdev_nvme_attach_controller" 00:35:36.312 } 00:35:36.312 EOF 00:35:36.312 )") 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.312 { 00:35:36.312 "params": { 00:35:36.312 "name": "Nvme$subsystem", 00:35:36.312 "trtype": "$TEST_TRANSPORT", 00:35:36.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.312 "adrfam": "ipv4", 00:35:36.312 "trsvcid": "$NVMF_PORT", 00:35:36.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.312 "hdgst": ${hdgst:-false}, 00:35:36.312 "ddgst": ${ddgst:-false} 00:35:36.312 }, 00:35:36.312 "method": "bdev_nvme_attach_controller" 00:35:36.312 } 00:35:36.312 EOF 00:35:36.312 )") 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
cat 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.312 { 00:35:36.312 "params": { 00:35:36.312 "name": "Nvme$subsystem", 00:35:36.312 "trtype": "$TEST_TRANSPORT", 00:35:36.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.312 "adrfam": "ipv4", 00:35:36.312 "trsvcid": "$NVMF_PORT", 00:35:36.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.312 "hdgst": ${hdgst:-false}, 00:35:36.312 "ddgst": ${ddgst:-false} 00:35:36.312 }, 00:35:36.312 "method": "bdev_nvme_attach_controller" 00:35:36.312 } 00:35:36.312 EOF 00:35:36.312 )") 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:36.312 20:05:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:36.312 "params": { 00:35:36.312 "name": "Nvme0", 00:35:36.312 "trtype": "tcp", 00:35:36.312 "traddr": "10.0.0.2", 00:35:36.312 "adrfam": "ipv4", 00:35:36.312 "trsvcid": "4420", 00:35:36.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.312 "hdgst": false, 00:35:36.312 "ddgst": false 00:35:36.312 }, 00:35:36.312 "method": "bdev_nvme_attach_controller" 00:35:36.312 },{ 00:35:36.312 "params": { 00:35:36.312 "name": "Nvme1", 00:35:36.312 "trtype": "tcp", 00:35:36.312 "traddr": "10.0.0.2", 00:35:36.312 "adrfam": "ipv4", 00:35:36.312 "trsvcid": "4420", 00:35:36.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.313 "hdgst": false, 00:35:36.313 "ddgst": false 00:35:36.313 }, 00:35:36.313 "method": "bdev_nvme_attach_controller" 00:35:36.313 },{ 00:35:36.313 "params": { 00:35:36.313 "name": "Nvme2", 00:35:36.313 "trtype": "tcp", 00:35:36.313 "traddr": "10.0.0.2", 00:35:36.313 "adrfam": "ipv4", 00:35:36.313 "trsvcid": "4420", 00:35:36.313 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:36.313 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:36.313 "hdgst": false, 00:35:36.313 "ddgst": false 00:35:36.313 }, 00:35:36.313 "method": "bdev_nvme_attach_controller" 00:35:36.313 }' 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:36.313 20:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.313 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.313 ... 00:35:36.313 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.313 ... 00:35:36.313 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.313 ... 00:35:36.313 fio-3.35 00:35:36.313 Starting 24 threads 00:35:36.313 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.512 00:35:48.512 filename0: (groupid=0, jobs=1): err= 0: pid=4147348: Thu Jul 25 20:05:56 2024 00:35:48.512 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10008msec) 00:35:48.512 slat (usec): min=10, max=113, avg=47.28, stdev=15.20 00:35:48.512 clat (usec): min=11958, max=57898, avg=32809.14, stdev=2123.56 00:35:48.512 lat (usec): min=12014, max=57920, avg=32856.42, stdev=2122.98 00:35:48.512 clat percentiles (usec): 00:35:48.512 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.512 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:35:48.512 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.512 | 99.00th=[38011], 99.50th=[43779], 99.90th=[57934], 99.95th=[57934], 00:35:48.512 | 99.99th=[57934] 00:35:48.512 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.512 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.512 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:48.512 cpu : usr=96.98%, sys=2.11%, ctx=160, majf=0, minf=27 00:35:48.512 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.512 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.512 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.512 filename0: (groupid=0, jobs=1): err= 0: pid=4147349: Thu Jul 25 20:05:56 2024 00:35:48.512 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10009msec) 00:35:48.512 slat (usec): min=8, max=107, avg=29.91, stdev=15.29 00:35:48.512 clat (usec): min=24040, max=46480, avg=33006.39, stdev=1308.06 00:35:48.512 lat (usec): min=24098, max=46533, avg=33036.30, stdev=1307.98 00:35:48.512 clat percentiles (usec): 00:35:48.512 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:48.512 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:48.512 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:48.512 | 99.00th=[36439], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:35:48.512 | 99.99th=[46400] 00:35:48.512 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=58.73, samples=20 00:35:48.512 iops : min= 448, max= 512, avg=480.00, stdev=14.68, samples=20 00:35:48.512 lat (msec) : 50=100.00% 00:35:48.512 cpu : usr=93.67%, 
sys=3.58%, ctx=382, majf=0, minf=47 00:35:48.512 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:48.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.512 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.512 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.512 filename0: (groupid=0, jobs=1): err= 0: pid=4147350: Thu Jul 25 20:05:56 2024 00:35:48.512 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10011msec) 00:35:48.512 slat (usec): min=14, max=154, avg=47.40, stdev=21.05 00:35:48.512 clat (usec): min=27902, max=44515, avg=32835.75, stdev=1216.96 00:35:48.512 lat (usec): min=27964, max=44543, avg=32883.15, stdev=1213.90 00:35:48.512 clat percentiles (usec): 00:35:48.512 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.512 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:35:48.512 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.512 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:35:48.512 | 99.99th=[44303] 00:35:48.512 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.16, stdev=59.99, samples=19 00:35:48.512 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:35:48.512 lat (msec) : 50=100.00% 00:35:48.512 cpu : usr=94.25%, sys=3.40%, ctx=249, majf=0, minf=25 00:35:48.512 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename0: (groupid=0, jobs=1): err= 0: pid=4147351: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:35:48.513 slat (usec): min=9, max=142, avg=38.77, stdev=16.63 00:35:48.513 clat (usec): min=23739, max=49705, avg=32943.60, stdev=1405.33 00:35:48.513 lat (usec): min=23749, max=49752, avg=32982.37, stdev=1403.34 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:48.513 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.513 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:35:48.513 | 99.00th=[38011], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:35:48.513 | 99.99th=[49546] 00:35:48.513 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.513 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.513 lat (msec) : 50=100.00% 00:35:48.513 cpu : usr=98.13%, sys=1.46%, ctx=15, majf=0, minf=27 00:35:48.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename0: (groupid=0, jobs=1): err= 0: pid=4147352: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:35:48.513 slat (usec): min=8, 
max=140, avg=30.93, stdev=23.11 00:35:48.513 clat (usec): min=25964, max=47593, avg=33032.03, stdev=1316.81 00:35:48.513 lat (usec): min=25978, max=47618, avg=33062.96, stdev=1316.12 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:48.513 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:48.513 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:48.513 | 99.00th=[38011], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:35:48.513 | 99.99th=[47449] 00:35:48.513 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=74.48, samples=19 00:35:48.513 iops : min= 416, max= 512, avg=480.00, stdev=18.62, samples=19 00:35:48.513 lat (msec) : 50=100.00% 00:35:48.513 cpu : usr=98.17%, sys=1.41%, ctx=20, majf=0, minf=38 00:35:48.513 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename0: (groupid=0, jobs=1): err= 0: pid=4147353: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:35:48.513 slat (usec): min=8, max=162, avg=59.44, stdev=31.18 00:35:48.513 clat (usec): min=28072, max=49298, avg=32762.30, stdev=1360.81 00:35:48.513 lat (usec): min=28170, max=49339, avg=32821.74, stdev=1354.09 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:48.513 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.513 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.513 | 99.00th=[36963], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:35:48.513 | 99.99th=[49546] 00:35:48.513 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.513 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.513 lat (msec) : 50=100.00% 00:35:48.513 cpu : usr=94.63%, sys=3.11%, ctx=196, majf=0, minf=26 00:35:48.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename0: (groupid=0, jobs=1): err= 0: pid=4147354: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=481, BW=1926KiB/s (1973kB/s)(18.8MiB/10013msec) 00:35:48.513 slat (usec): min=7, max=103, avg=23.50, stdev=18.52 00:35:48.513 clat (usec): min=9493, max=55642, avg=33014.87, stdev=2233.30 00:35:48.513 lat (usec): min=9505, max=55652, avg=33038.37, stdev=2233.53 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[30802], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:48.513 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:48.513 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:48.513 | 99.00th=[36439], 99.50th=[47449], 99.90th=[55313], 99.95th=[55313], 00:35:48.513 | 99.99th=[55837] 00:35:48.513 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, 
avg=1922.53, stdev=96.19, samples=19 00:35:48.513 iops : min= 416, max= 512, avg=480.63, stdev=24.05, samples=19 00:35:48.513 lat (msec) : 10=0.19%, 20=0.19%, 50=99.38%, 100=0.25% 00:35:48.513 cpu : usr=97.96%, sys=1.63%, ctx=20, majf=0, minf=45 00:35:48.513 IO depths : 1=5.6%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename0: (groupid=0, jobs=1): err= 0: pid=4147355: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10013msec) 00:35:48.513 slat (usec): min=8, max=438, avg=46.11, stdev=25.51 00:35:48.513 clat (usec): min=13581, max=57338, avg=32703.33, stdev=1623.44 00:35:48.513 lat (usec): min=13589, max=57370, avg=32749.44, stdev=1623.94 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.513 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:48.513 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.513 | 99.00th=[35914], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:35:48.513 | 99.99th=[57410] 00:35:48.513 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1926.74, stdev=51.80, samples=19 00:35:48.513 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:35:48.513 lat (msec) : 20=0.37%, 50=99.59%, 100=0.04% 00:35:48.513 cpu : usr=97.58%, sys=1.59%, ctx=66, majf=0, minf=27 00:35:48.513 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename1: (groupid=0, jobs=1): err= 0: pid=4147356: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10007msec) 00:35:48.513 slat (usec): min=9, max=121, avg=47.59, stdev=19.17 00:35:48.513 clat (usec): min=13790, max=59576, avg=32780.30, stdev=2121.51 00:35:48.513 lat (usec): min=13812, max=59615, avg=32827.88, stdev=2121.79 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.513 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:48.513 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:35:48.513 | 99.00th=[36439], 99.50th=[43779], 99.90th=[59507], 99.95th=[59507], 00:35:48.513 | 99.99th=[59507] 00:35:48.513 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.513 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.513 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:48.513 cpu : usr=95.59%, sys=2.68%, ctx=231, majf=0, minf=31 00:35:48.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename1: (groupid=0, jobs=1): err= 0: pid=4147357: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:35:48.513 slat (usec): min=8, max=201, avg=56.13, stdev=27.48 00:35:48.513 clat (usec): min=24157, max=47523, avg=32746.98, stdev=1377.47 00:35:48.513 lat (usec): min=24237, max=47547, avg=32803.12, stdev=1374.09 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:48.513 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:48.513 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.513 | 99.00th=[38536], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:35:48.513 | 99.99th=[47449] 00:35:48.513 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.513 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.513 lat (msec) : 50=100.00% 00:35:48.513 cpu : usr=97.35%, sys=1.74%, ctx=107, majf=0, minf=34 00:35:48.513 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.513 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.513 filename1: (groupid=0, jobs=1): err= 0: pid=4147358: Thu Jul 25 20:05:56 2024 00:35:48.513 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:35:48.513 slat (usec): min=8, max=140, avg=34.87, stdev=20.65 00:35:48.513 clat (usec): min=25052, max=47600, avg=32983.89, stdev=1302.77 00:35:48.513 lat (usec): min=25067, max=47627, avg=33018.76, stdev=1302.18 00:35:48.513 clat percentiles (usec): 00:35:48.513 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:48.514 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.514 | 99.00th=[38536], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:35:48.514 | 99.99th=[47449] 00:35:48.514 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.514 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.514 lat (msec) : 50=100.00% 00:35:48.514 cpu : usr=98.25%, sys=1.35%, ctx=21, majf=0, minf=26 00:35:48.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename1: (groupid=0, jobs=1): err= 0: pid=4147359: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10009msec) 00:35:48.514 slat (usec): min=8, max=214, avg=47.91, stdev=17.00 00:35:48.514 clat (usec): min=13828, max=58466, avg=32806.00, stdev=2149.56 00:35:48.514 lat (usec): min=13847, max=58507, avg=32853.91, stdev=2149.42 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.514 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 
00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.514 | 99.00th=[38536], 99.50th=[43254], 99.90th=[58459], 99.95th=[58459], 00:35:48.514 | 99.99th=[58459] 00:35:48.514 bw ( KiB/s): min= 1667, max= 2048, per=4.16%, avg=1920.16, stdev=84.83, samples=19 00:35:48.514 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:35:48.514 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:48.514 cpu : usr=95.76%, sys=2.59%, ctx=215, majf=0, minf=30 00:35:48.514 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename1: (groupid=0, jobs=1): err= 0: pid=4147360: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10013msec) 00:35:48.514 slat (usec): min=13, max=146, avg=46.06, stdev=17.75 00:35:48.514 clat (usec): min=13838, max=43912, avg=32723.41, stdev=1472.52 00:35:48.514 lat (usec): min=13872, max=43960, avg=32769.48, stdev=1471.86 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.514 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.514 | 99.00th=[35914], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:35:48.514 | 99.99th=[43779] 00:35:48.514 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1926.74, stdev=51.80, samples=19 00:35:48.514 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:35:48.514 lat (msec) : 20=0.33%, 50=99.67% 00:35:48.514 cpu : usr=97.41%, sys=2.18%, ctx=25, majf=0, minf=29 00:35:48.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename1: (groupid=0, jobs=1): err= 0: pid=4147361: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10013msec) 00:35:48.514 slat (usec): min=8, max=134, avg=32.00, stdev=14.27 00:35:48.514 clat (usec): min=27118, max=49276, avg=32977.14, stdev=1306.31 00:35:48.514 lat (usec): min=27126, max=49300, avg=33009.14, stdev=1306.64 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:35:48.514 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.514 | 99.00th=[36963], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:35:48.514 | 99.99th=[49021] 00:35:48.514 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.514 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.514 lat (msec) : 50=100.00% 00:35:48.514 cpu : usr=98.12%, sys=1.47%, ctx=19, majf=0, minf=30 00:35:48.514 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename1: (groupid=0, jobs=1): err= 0: pid=4147362: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10011msec) 00:35:48.514 slat (usec): min=10, max=154, avg=44.89, stdev=19.71 00:35:48.514 clat (usec): min=27848, max=45009, avg=32890.47, stdev=1221.86 00:35:48.514 lat (usec): min=27907, max=45031, avg=32935.36, stdev=1217.69 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.514 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.514 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:35:48.514 | 99.99th=[44827] 00:35:48.514 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=60.34, samples=19 00:35:48.514 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:35:48.514 lat (msec) : 50=100.00% 00:35:48.514 cpu : usr=98.11%, sys=1.49%, ctx=29, majf=0, minf=32 00:35:48.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename1: (groupid=0, jobs=1): err= 0: pid=4147363: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=481, BW=1924KiB/s (1971kB/s)(18.8MiB/10010msec) 00:35:48.514 slat (usec): min=6, max=135, avg=37.14, stdev=15.84 00:35:48.514 clat (usec): min=23613, max=47763, avg=32929.41, stdev=1336.73 00:35:48.514 lat (usec): min=23662, max=47816, avg=32966.54, stdev=1335.86 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:48.514 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.514 | 99.00th=[36963], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:35:48.514 | 99.99th=[47973] 00:35:48.514 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=58.73, samples=20 00:35:48.514 iops : min= 448, max= 512, avg=480.00, stdev=14.68, samples=20 00:35:48.514 lat (msec) : 50=100.00% 00:35:48.514 cpu : usr=98.01%, sys=1.59%, ctx=23, majf=0, minf=32 00:35:48.514 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename2: (groupid=0, jobs=1): err= 0: pid=4147364: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:35:48.514 slat (usec): min=8, max=162, avg=42.00, stdev=18.16 00:35:48.514 clat (usec): min=23664, max=49311, avg=32942.85, stdev=1548.39 00:35:48.514 lat (usec): min=23675, max=49335, avg=32984.86, 
stdev=1547.63 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:48.514 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.514 | 99.00th=[41157], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:35:48.514 | 99.99th=[49546] 00:35:48.514 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=74.09, samples=19 00:35:48.514 iops : min= 416, max= 512, avg=480.00, stdev=18.52, samples=19 00:35:48.514 lat (msec) : 50=100.00% 00:35:48.514 cpu : usr=96.08%, sys=2.54%, ctx=258, majf=0, minf=35 00:35:48.514 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.514 filename2: (groupid=0, jobs=1): err= 0: pid=4147365: Thu Jul 25 20:05:56 2024 00:35:48.514 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10015msec) 00:35:48.514 slat (usec): min=8, max=129, avg=35.25, stdev=11.69 00:35:48.514 clat (usec): min=27164, max=49271, avg=32947.52, stdev=1296.63 00:35:48.514 lat (usec): min=27192, max=49324, avg=32982.77, stdev=1297.74 00:35:48.514 clat percentiles (usec): 00:35:48.514 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:35:48.514 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.514 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.514 | 99.00th=[36963], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:35:48.514 | 99.99th=[49021] 00:35:48.514 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.514 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.514 lat (msec) : 50=100.00% 00:35:48.514 cpu : usr=98.10%, sys=1.45%, ctx=27, majf=0, minf=36 00:35:48.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 filename2: (groupid=0, jobs=1): err= 0: pid=4147366: Thu Jul 25 20:05:56 2024 00:35:48.515 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10008msec) 00:35:48.515 slat (usec): min=8, max=174, avg=49.14, stdev=19.21 00:35:48.515 clat (usec): min=13814, max=57808, avg=32815.54, stdev=2117.53 00:35:48.515 lat (usec): min=13855, max=57835, avg=32864.68, stdev=2118.16 00:35:48.515 clat percentiles (usec): 00:35:48.515 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.515 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:35:48.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.515 | 99.00th=[38011], 99.50th=[43779], 99.90th=[57934], 99.95th=[57934], 00:35:48.515 | 99.99th=[57934] 00:35:48.515 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=72.74, samples=19 00:35:48.515 iops : min= 416, max= 512, avg=480.00, stdev=18.18, samples=19 00:35:48.515 lat (msec) : 20=0.33%, 50=99.29%, 
100=0.37% 00:35:48.515 cpu : usr=98.21%, sys=1.38%, ctx=28, majf=0, minf=37 00:35:48.515 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 filename2: (groupid=0, jobs=1): err= 0: pid=4147367: Thu Jul 25 20:05:56 2024 00:35:48.515 read: IOPS=481, BW=1925KiB/s (1972kB/s)(18.8MiB/10005msec) 00:35:48.515 slat (usec): min=8, max=128, avg=42.56, stdev=18.51 00:35:48.515 clat (usec): min=12550, max=57680, avg=32868.14, stdev=1847.72 00:35:48.515 lat (usec): min=12559, max=57726, avg=32910.70, stdev=1846.61 00:35:48.515 clat percentiles (usec): 00:35:48.515 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.515 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:35:48.515 | 99.00th=[38536], 99.50th=[43779], 99.90th=[54264], 99.95th=[56886], 00:35:48.515 | 99.99th=[57934] 00:35:48.515 bw ( KiB/s): min= 1776, max= 2048, per=4.16%, avg=1920.00, stdev=45.57, samples=19 00:35:48.515 iops : min= 444, max= 512, avg=480.00, stdev=11.39, samples=19 00:35:48.515 lat (msec) : 20=0.29%, 50=99.42%, 100=0.29% 00:35:48.515 cpu : usr=97.92%, sys=1.68%, ctx=23, majf=0, minf=27 00:35:48.515 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 filename2: (groupid=0, jobs=1): err= 0: pid=4147368: Thu Jul 25 20:05:56 2024 00:35:48.515 read: IOPS=481, BW=1924KiB/s (1971kB/s)(18.8MiB/10010msec) 00:35:48.515 slat (usec): min=8, max=173, avg=35.17, stdev=15.72 00:35:48.515 clat (usec): min=24362, max=46756, avg=32947.81, stdev=1324.97 00:35:48.515 lat (usec): min=24411, max=46808, avg=32982.98, stdev=1324.36 00:35:48.515 clat percentiles (usec): 00:35:48.515 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:48.515 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:48.515 | 99.00th=[36963], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:35:48.515 | 99.99th=[46924] 00:35:48.515 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=58.73, samples=20 00:35:48.515 iops : min= 448, max= 512, avg=480.00, stdev=14.68, samples=20 00:35:48.515 lat (msec) : 50=100.00% 00:35:48.515 cpu : usr=98.05%, sys=1.51%, ctx=18, majf=0, minf=55 00:35:48.515 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 filename2: (groupid=0, jobs=1): err= 0: pid=4147369: Thu Jul 25 20:05:56 2024 00:35:48.515 read: IOPS=481, BW=1925KiB/s 
(1971kB/s)(18.8MiB/10008msec) 00:35:48.515 slat (usec): min=11, max=122, avg=48.72, stdev=17.58 00:35:48.515 clat (usec): min=13741, max=57922, avg=32785.24, stdev=2063.36 00:35:48.515 lat (usec): min=13785, max=57944, avg=32833.96, stdev=2062.68 00:35:48.515 clat percentiles (usec): 00:35:48.515 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:48.515 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:48.515 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:35:48.515 | 99.00th=[36963], 99.50th=[43254], 99.90th=[57934], 99.95th=[57934], 00:35:48.515 | 99.99th=[57934] 00:35:48.515 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:48.515 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:48.515 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:48.515 cpu : usr=96.30%, sys=2.35%, ctx=158, majf=0, minf=36 00:35:48.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 filename2: (groupid=0, jobs=1): err= 0: pid=4147370: Thu Jul 25 20:05:56 2024 00:35:48.515 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.8MiB/10008msec) 00:35:48.515 slat (usec): min=7, max=123, avg=48.93, stdev=21.19 00:35:48.515 clat (usec): min=11770, max=49393, avg=32768.20, stdev=2072.28 00:35:48.515 lat (usec): min=11803, max=49428, avg=32817.14, stdev=2071.92 00:35:48.515 clat percentiles (usec): 00:35:48.515 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:48.515 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:48.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.515 | 99.00th=[38536], 99.50th=[43779], 99.90th=[49021], 99.95th=[49546], 00:35:48.515 | 99.99th=[49546] 00:35:48.515 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1922.53, stdev=86.04, samples=19 00:35:48.515 iops : min= 416, max= 512, avg=480.63, stdev=21.51, samples=19 00:35:48.515 lat (msec) : 20=0.46%, 50=99.54% 00:35:48.515 cpu : usr=98.17%, sys=1.41%, ctx=17, majf=0, minf=32 00:35:48.515 IO depths : 1=5.2%, 2=11.5%, 4=24.9%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 filename2: (groupid=0, jobs=1): err= 0: pid=4147371: Thu Jul 25 20:05:56 2024 00:35:48.515 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10004msec) 00:35:48.515 slat (usec): min=5, max=112, avg=34.07, stdev=11.39 00:35:48.515 clat (usec): min=24313, max=43426, avg=32932.51, stdev=1134.08 00:35:48.515 lat (usec): min=24367, max=43462, avg=32966.58, stdev=1134.25 00:35:48.515 clat percentiles (usec): 00:35:48.515 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:35:48.515 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:48.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:48.515 | 99.00th=[36963], 99.50th=[40633], 99.90th=[43254], 
99.95th=[43254], 00:35:48.515 | 99.99th=[43254] 00:35:48.515 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1926.74, stdev=67.11, samples=19 00:35:48.515 iops : min= 448, max= 512, avg=481.68, stdev=16.78, samples=19 00:35:48.515 lat (msec) : 50=100.00% 00:35:48.515 cpu : usr=95.59%, sys=2.65%, ctx=774, majf=0, minf=40 00:35:48.515 IO depths : 1=5.9%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.515 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.515 00:35:48.515 Run status group 0 (all jobs): 00:35:48.515 READ: bw=45.1MiB/s (47.3MB/s), 1923KiB/s-1930KiB/s (1969kB/s-1977kB/s), io=452MiB (474MB), run=10004-10016msec 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.515 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:48.516 
20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 bdev_null0 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 [2024-07-25 20:05:56.538306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 bdev_null1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.516 { 00:35:48.516 "params": { 00:35:48.516 "name": "Nvme$subsystem", 00:35:48.516 "trtype": "$TEST_TRANSPORT", 00:35:48.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.516 "adrfam": "ipv4", 00:35:48.516 "trsvcid": "$NVMF_PORT", 00:35:48.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.516 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:35:48.516 "hdgst": ${hdgst:-false}, 00:35:48.516 "ddgst": ${ddgst:-false} 00:35:48.516 }, 00:35:48.516 "method": "bdev_nvme_attach_controller" 00:35:48.516 } 00:35:48.516 EOF 00:35:48.516 )") 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.516 { 00:35:48.516 "params": { 00:35:48.516 "name": "Nvme$subsystem", 00:35:48.516 "trtype": "$TEST_TRANSPORT", 00:35:48.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.516 "adrfam": "ipv4", 00:35:48.516 "trsvcid": "$NVMF_PORT", 00:35:48.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.516 "hdgst": ${hdgst:-false}, 00:35:48.516 "ddgst": ${ddgst:-false} 00:35:48.516 }, 00:35:48.516 "method": "bdev_nvme_attach_controller" 00:35:48.516 } 00:35:48.516 EOF 00:35:48.516 )") 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # 
jq . 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:48.516 20:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:48.516 "params": { 00:35:48.516 "name": "Nvme0", 00:35:48.516 "trtype": "tcp", 00:35:48.516 "traddr": "10.0.0.2", 00:35:48.516 "adrfam": "ipv4", 00:35:48.516 "trsvcid": "4420", 00:35:48.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.516 "hdgst": false, 00:35:48.516 "ddgst": false 00:35:48.516 }, 00:35:48.516 "method": "bdev_nvme_attach_controller" 00:35:48.517 },{ 00:35:48.517 "params": { 00:35:48.517 "name": "Nvme1", 00:35:48.517 "trtype": "tcp", 00:35:48.517 "traddr": "10.0.0.2", 00:35:48.517 "adrfam": "ipv4", 00:35:48.517 "trsvcid": "4420", 00:35:48.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:48.517 "hdgst": false, 00:35:48.517 "ddgst": false 00:35:48.517 }, 00:35:48.517 "method": "bdev_nvme_attach_controller" 00:35:48.517 }' 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.517 20:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.517 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:48.517 ... 00:35:48.517 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:48.517 ... 
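[Editor's note] A minimal stand-alone sketch of the fio-over-SPDK-bdev invocation traced above, for readers who want to reproduce it outside the harness. The harness streams the JSON config and job file over /dev/fd/62 and /dev/fd/61; the file names, the bdev name Nvme0n1, and time_based below are assumptions, and the real run attaches a second controller (Nvme1 / cnode1) in the same way.

# nvmf_dif.json, the SPDK JSON config consumed via --spdk_json_conf
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

# nvmf_dif.fio, a job file mirroring the randread parameters from the trace
# (rw=randread, bs=8k read / 16k write / 128k trim, iodepth=8, 2 jobs, ~5s runtime)
[global]
ioengine=spdk_bdev
spdk_json_conf=nvmf_dif.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

# run with the plugin preloaded, as the harness does via LD_PRELOAD
LD_PRELOAD=./spdk/build/fio/spdk_bdev fio nvmf_dif.fio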
00:35:48.517 fio-3.35 00:35:48.517 Starting 4 threads 00:35:48.517 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.830 00:35:53.830 filename0: (groupid=0, jobs=1): err= 0: pid=4148749: Thu Jul 25 20:06:02 2024 00:35:53.830 read: IOPS=1976, BW=15.4MiB/s (16.2MB/s)(77.2MiB/5001msec) 00:35:53.830 slat (nsec): min=3791, max=35558, avg=13411.85, stdev=4273.13 00:35:53.830 clat (usec): min=965, max=9927, avg=3999.63, stdev=578.26 00:35:53.830 lat (usec): min=979, max=9940, avg=4013.04, stdev=578.31 00:35:53.830 clat percentiles (usec): 00:35:53.830 | 1.00th=[ 2507], 5.00th=[ 3130], 10.00th=[ 3392], 20.00th=[ 3687], 00:35:53.830 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:35:53.830 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 4948], 00:35:53.830 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7242], 99.95th=[ 7308], 00:35:53.830 | 99.99th=[ 9896] 00:35:53.830 bw ( KiB/s): min=15360, max=16576, per=25.08%, avg=15747.56, stdev=348.14, samples=9 00:35:53.830 iops : min= 1920, max= 2072, avg=1968.44, stdev=43.52, samples=9 00:35:53.830 lat (usec) : 1000=0.01% 00:35:53.830 lat (msec) : 2=0.27%, 4=48.81%, 10=50.91% 00:35:53.830 cpu : usr=93.58%, sys=5.68%, ctx=20, majf=0, minf=9 00:35:53.830 IO depths : 1=0.2%, 2=13.1%, 4=58.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 issued rwts: total=9886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.830 filename0: (groupid=0, jobs=1): err= 0: pid=4148750: Thu Jul 25 20:06:02 2024 00:35:53.830 read: IOPS=2033, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5003msec) 00:35:53.830 slat (nsec): min=3699, max=31828, avg=11983.70, stdev=3679.19 00:35:53.830 clat (usec): min=1124, max=9525, avg=3893.38, stdev=578.39 00:35:53.830 lat (usec): min=1137, max=9540, avg=3905.36, stdev=578.43 00:35:53.830 clat percentiles (usec): 00:35:53.830 | 1.00th=[ 2507], 5.00th=[ 2999], 10.00th=[ 3228], 20.00th=[ 3490], 00:35:53.830 | 30.00th=[ 3720], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:35:53.830 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4686], 00:35:53.830 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 7308], 99.95th=[ 8979], 00:35:53.830 | 99.99th=[ 9110] 00:35:53.830 bw ( KiB/s): min=15712, max=17152, per=25.91%, avg=16268.80, stdev=488.51, samples=10 00:35:53.830 iops : min= 1964, max= 2144, avg=2033.60, stdev=61.06, samples=10 00:35:53.830 lat (msec) : 2=0.33%, 4=51.89%, 10=47.78% 00:35:53.830 cpu : usr=92.64%, sys=6.66%, ctx=23, majf=0, minf=0 00:35:53.830 IO depths : 1=0.2%, 2=9.6%, 4=62.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 issued rwts: total=10176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.830 filename1: (groupid=0, jobs=1): err= 0: pid=4148751: Thu Jul 25 20:06:02 2024 00:35:53.830 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5001msec) 00:35:53.830 slat (nsec): min=3725, max=31379, avg=12463.85, stdev=4119.83 00:35:53.830 clat (usec): min=870, max=9866, avg=4174.13, stdev=628.04 00:35:53.830 lat (usec): min=883, max=9878, avg=4186.59, stdev=627.60 00:35:53.830 clat percentiles (usec): 00:35:53.830 
| 1.00th=[ 2835], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3916], 00:35:53.830 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:35:53.830 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5407], 00:35:53.830 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7111], 99.95th=[ 7242], 00:35:53.830 | 99.99th=[ 9896] 00:35:53.830 bw ( KiB/s): min=14480, max=15616, per=24.11%, avg=15139.22, stdev=449.70, samples=9 00:35:53.830 iops : min= 1810, max= 1952, avg=1892.33, stdev=56.30, samples=9 00:35:53.830 lat (usec) : 1000=0.02% 00:35:53.830 lat (msec) : 2=0.30%, 4=34.99%, 10=64.69% 00:35:53.830 cpu : usr=94.12%, sys=5.30%, ctx=9, majf=0, minf=9 00:35:53.830 IO depths : 1=0.1%, 2=8.5%, 4=62.4%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 issued rwts: total=9490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.830 filename1: (groupid=0, jobs=1): err= 0: pid=4148752: Thu Jul 25 20:06:02 2024 00:35:53.830 read: IOPS=1942, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5003msec) 00:35:53.830 slat (nsec): min=3799, max=33808, avg=12645.94, stdev=4007.70 00:35:53.830 clat (usec): min=891, max=10596, avg=4074.11, stdev=642.46 00:35:53.830 lat (usec): min=904, max=10618, avg=4086.76, stdev=642.25 00:35:53.830 clat percentiles (usec): 00:35:53.830 | 1.00th=[ 2671], 5.00th=[ 3228], 10.00th=[ 3425], 20.00th=[ 3752], 00:35:53.830 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:35:53.830 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4817], 95.00th=[ 5342], 00:35:53.830 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 8455], 00:35:53.830 | 99.99th=[10552] 00:35:53.830 bw ( KiB/s): min=14656, max=15920, per=24.75%, avg=15539.10, stdev=370.93, samples=10 00:35:53.830 iops : min= 1832, max= 1990, avg=1942.30, stdev=46.35, samples=10 00:35:53.830 lat (usec) : 1000=0.02% 00:35:53.830 lat (msec) : 2=0.41%, 4=43.34%, 10=56.22%, 20=0.01% 00:35:53.830 cpu : usr=93.46%, sys=5.88%, ctx=16, majf=0, minf=0 00:35:53.830 IO depths : 1=0.1%, 2=12.4%, 4=60.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.830 issued rwts: total=9718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.830 00:35:53.830 Run status group 0 (all jobs): 00:35:53.830 READ: bw=61.3MiB/s (64.3MB/s), 14.8MiB/s-15.9MiB/s (15.5MB/s-16.7MB/s), io=307MiB (322MB), run=5001-5003msec 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.830 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.830 00:35:53.830 real 0m24.115s 00:35:53.830 user 4m30.347s 00:35:53.830 sys 0m7.830s 00:35:53.831 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:53.831 20:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.831 ************************************ 00:35:53.831 END TEST fio_dif_rand_params 00:35:53.831 ************************************ 00:35:53.831 20:06:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:53.831 20:06:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:53.831 20:06:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:53.831 20:06:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.831 ************************************ 00:35:53.831 START TEST fio_dif_digest 00:35:53.831 ************************************ 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.831 bdev_null0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.831 [2024-07-25 20:06:03.049022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.831 { 00:35:53.831 "params": { 00:35:53.831 "name": "Nvme$subsystem", 00:35:53.831 "trtype": "$TEST_TRANSPORT", 00:35:53.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.831 "adrfam": "ipv4", 00:35:53.831 "trsvcid": "$NVMF_PORT", 00:35:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.831 
"hdgst": ${hdgst:-false}, 00:35:53.831 "ddgst": ${ddgst:-false} 00:35:53.831 }, 00:35:53.831 "method": "bdev_nvme_attach_controller" 00:35:53.831 } 00:35:53.831 EOF 00:35:53.831 )") 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:53.831 "params": { 00:35:53.831 "name": "Nvme0", 00:35:53.831 "trtype": "tcp", 00:35:53.831 "traddr": "10.0.0.2", 00:35:53.831 "adrfam": "ipv4", 00:35:53.831 "trsvcid": "4420", 00:35:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.831 "hdgst": true, 00:35:53.831 "ddgst": true 00:35:53.831 }, 00:35:53.831 "method": "bdev_nvme_attach_controller" 00:35:53.831 }' 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:53.831 20:06:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.089 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:54.089 ... 
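[Editor's note] The digest pass drives the same fio plugin with hdgst/ddgst enabled on the attach and a 128 KiB, iodepth=3, 3-job workload. For orientation, the target-side setup traced a little above corresponds roughly to the rpc.py sequence below (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py; a running nvmf/TCP target with its transport already created is assumed here).

# DIF type 3 null bdev: 64 MB, 512-byte blocks, 16 bytes of metadata per block
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it over NVMe/TCP at 10.0.0.2:4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Tear-down, mirroring the destroy_subsystems trace further down:
# ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
# ./scripts/rpc.py bdev_null_delete bdev_null0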
00:35:54.089 fio-3.35 00:35:54.089 Starting 3 threads 00:35:54.089 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.269 00:36:06.269 filename0: (groupid=0, jobs=1): err= 0: pid=4149615: Thu Jul 25 20:06:13 2024 00:36:06.269 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(262MiB/10044msec) 00:36:06.269 slat (nsec): min=4334, max=69957, avg=18547.54, stdev=4735.91 00:36:06.269 clat (usec): min=8024, max=54655, avg=14327.62, stdev=1590.94 00:36:06.269 lat (usec): min=8043, max=54663, avg=14346.17, stdev=1590.83 00:36:06.269 clat percentiles (usec): 00:36:06.269 | 1.00th=[11731], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:36:06.269 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:36:06.269 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:36:06.269 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21890], 99.95th=[52691], 00:36:06.269 | 99.99th=[54789] 00:36:06.269 bw ( KiB/s): min=25907, max=28416, per=33.78%, avg=26818.55, stdev=532.78, samples=20 00:36:06.269 iops : min= 202, max= 222, avg=209.50, stdev= 4.20, samples=20 00:36:06.269 lat (msec) : 10=0.19%, 20=99.67%, 50=0.05%, 100=0.10% 00:36:06.269 cpu : usr=93.30%, sys=6.21%, ctx=41, majf=0, minf=148 00:36:06.269 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.269 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:06.269 filename0: (groupid=0, jobs=1): err= 0: pid=4149616: Thu Jul 25 20:06:13 2024 00:36:06.269 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10045msec) 00:36:06.269 slat (nsec): min=4170, max=41088, avg=16595.82, stdev=4151.14 00:36:06.269 clat (usec): min=11197, max=56673, avg=14957.60, stdev=2155.91 00:36:06.269 lat (usec): min=11212, max=56695, avg=14974.19, stdev=2155.98 00:36:06.269 clat percentiles (usec): 00:36:06.269 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:36:06.269 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:36:06.269 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:36:06.269 | 99.00th=[17433], 99.50th=[17957], 99.90th=[55313], 99.95th=[55313], 00:36:06.269 | 99.99th=[56886] 00:36:06.269 bw ( KiB/s): min=23040, max=26880, per=32.36%, avg=25689.60, stdev=780.36, samples=20 00:36:06.269 iops : min= 180, max= 210, avg=200.70, stdev= 6.10, samples=20 00:36:06.269 lat (msec) : 20=99.75%, 50=0.05%, 100=0.20% 00:36:06.269 cpu : usr=94.62%, sys=4.91%, ctx=24, majf=0, minf=89 00:36:06.269 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.269 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:06.269 filename0: (groupid=0, jobs=1): err= 0: pid=4149617: Thu Jul 25 20:06:13 2024 00:36:06.269 read: IOPS=211, BW=26.5MiB/s (27.7MB/s)(266MiB/10047msec) 00:36:06.269 slat (nsec): min=4575, max=51281, avg=19580.89, stdev=5160.93 00:36:06.269 clat (usec): min=8888, max=51498, avg=14135.00, stdev=1481.37 00:36:06.269 lat (usec): min=8902, max=51513, avg=14154.58, stdev=1481.36 00:36:06.269 clat percentiles (usec): 00:36:06.269 | 
1.00th=[11863], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:36:06.269 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:36:06.269 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:36:06.269 | 99.00th=[16581], 99.50th=[16909], 99.90th=[22414], 99.95th=[49021], 00:36:06.269 | 99.99th=[51643] 00:36:06.269 bw ( KiB/s): min=26368, max=28416, per=34.23%, avg=27174.40, stdev=493.30, samples=20 00:36:06.269 iops : min= 206, max= 222, avg=212.30, stdev= 3.85, samples=20 00:36:06.269 lat (msec) : 10=0.38%, 20=99.48%, 50=0.09%, 100=0.05% 00:36:06.269 cpu : usr=92.51%, sys=6.20%, ctx=412, majf=0, minf=191 00:36:06.269 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.269 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:06.269 00:36:06.269 Run status group 0 (all jobs): 00:36:06.269 READ: bw=77.5MiB/s (81.3MB/s), 25.0MiB/s-26.5MiB/s (26.2MB/s-27.7MB/s), io=779MiB (817MB), run=10044-10047msec 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.269 00:36:06.269 real 0m11.161s 00:36:06.269 user 0m29.341s 00:36:06.269 sys 0m2.050s 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:06.269 20:06:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:06.269 ************************************ 00:36:06.269 END TEST fio_dif_digest 00:36:06.269 ************************************ 00:36:06.269 20:06:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:06.269 20:06:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:06.269 rmmod nvme_tcp 00:36:06.269 rmmod nvme_fabrics 
00:36:06.269 rmmod nvme_keyring 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 4143555 ']' 00:36:06.269 20:06:14 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 4143555 00:36:06.269 20:06:14 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 4143555 ']' 00:36:06.269 20:06:14 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 4143555 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4143555 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4143555' 00:36:06.270 killing process with pid 4143555 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@965 -- # kill 4143555 00:36:06.270 20:06:14 nvmf_dif -- common/autotest_common.sh@970 -- # wait 4143555 00:36:06.270 20:06:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:06.270 20:06:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.270 Waiting for block devices as requested 00:36:06.270 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:06.270 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:06.529 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:06.529 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:06.529 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:06.788 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:06.788 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:06.788 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:06.788 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:06.788 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.047 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.047 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.047 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.304 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.304 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.304 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:07.304 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.563 20:06:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:07.563 20:06:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:07.563 20:06:16 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:07.563 20:06:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:07.563 20:06:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.563 20:06:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:07.563 20:06:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.094 20:06:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:10.094 00:36:10.094 real 1m6.342s 00:36:10.094 user 6m27.251s 00:36:10.094 sys 0m18.654s 00:36:10.094 20:06:18 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:10.094 20:06:18 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:36:10.094 ************************************ 00:36:10.094 END TEST nvmf_dif 00:36:10.094 ************************************ 00:36:10.094 20:06:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:10.094 20:06:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:10.094 20:06:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:10.094 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:36:10.094 ************************************ 00:36:10.094 START TEST nvmf_abort_qd_sizes 00:36:10.094 ************************************ 00:36:10.094 20:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:10.094 * Looking for test storage... 00:36:10.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.094 20:06:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.094 20:06:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:10.094 20:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:11.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:11.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.469 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:11.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:11.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
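The two "Found ..." blocks above are the harness scanning the PCI bus for supported NIC device IDs and then listing the net device bound under each matching function. Done by hand against plain sysfs it comes down to roughly the loop below; the 8086:159b device ID and the cvl_0_* names are taken from the log, the loop itself is a sketch rather than the script's actual implementation.
  # List Intel E810 (vendor 0x8086, device 0x159b) functions and their bound net devices.
  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/}"
    ls "$pci/net" 2>/dev/null   # e.g. cvl_0_0 / cvl_0_1 in this run
  done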
00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:11.470 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:11.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:36:11.729 00:36:11.729 --- 10.0.0.2 ping statistics --- 00:36:11.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.729 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:11.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:11.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:36:11.729 00:36:11.729 --- 10.0.0.1 ping statistics --- 00:36:11.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.729 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:11.729 20:06:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:12.662 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.662 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.920 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.920 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.920 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.920 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:12.920 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.920 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.920 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:13.853 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=4154958 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 4154958 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 4154958 ']' 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:13.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:13.853 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 [2024-07-25 20:06:23.312230] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:36:14.110 [2024-07-25 20:06:23.312314] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.110 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.110 [2024-07-25 20:06:23.380207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:14.110 [2024-07-25 20:06:23.472085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.110 [2024-07-25 20:06:23.472149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.110 [2024-07-25 20:06:23.472166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.110 [2024-07-25 20:06:23.472179] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.110 [2024-07-25 20:06:23.472191] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.110 [2024-07-25 20:06:23.472275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.110 [2024-07-25 20:06:23.472332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:14.110 [2024-07-25 20:06:23.472375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:14.110 [2024-07-25 20:06:23.472377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:14.367 20:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.367 ************************************ 00:36:14.367 START TEST spdk_target_abort 00:36:14.367 ************************************ 00:36:14.367 20:06:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:14.367 20:06:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:14.367 20:06:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:14.367 20:06:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.367 20:06:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.641 spdk_targetn1 00:36:17.641 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.641 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:17.641 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.641 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.641 [2024-07-25 20:06:26.482772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.641 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.641 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.642 [2024-07-25 20:06:26.514985] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:17.642 20:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.642 EAL: No free 2048 kB hugepages reported on node 1 
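At this point spdk_target_abort has built its target entirely over RPC and is launching the first pass of the qds=(4 24 64) loop shown above. Since rpc_cmd here resolves to scripts/rpc.py calls against the target's RPC socket, the setup plus the sweep amounts to roughly the following; RPC names, arguments and the transport ID are copied from the trace, while the flag descriptions in the comments are assumptions rather than text verified against the abort example's help output.
  # Build the TCP target backed by the local NVMe drive, then sweep abort queue depths.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do
    # -q: abort queue depth under test, -w rw -M 50: mixed workload with 50% reads,
    # -o 4096: 4 KiB I/Os, -r: transport ID of the listener created above (assumed meanings).
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done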
00:36:20.920 Initializing NVMe Controllers 00:36:20.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:20.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:20.920 Initialization complete. Launching workers. 00:36:20.920 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12542, failed: 0 00:36:20.920 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1315, failed to submit 11227 00:36:20.920 success 748, unsuccess 567, failed 0 00:36:20.920 20:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:20.920 20:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:20.920 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.199 Initializing NVMe Controllers 00:36:24.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.199 Initialization complete. Launching workers. 00:36:24.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8544, failed: 0 00:36:24.200 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1285, failed to submit 7259 00:36:24.200 success 316, unsuccess 969, failed 0 00:36:24.200 20:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.200 20:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.200 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.480 Initializing NVMe Controllers 00:36:27.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.480 Initialization complete. Launching workers. 
00:36:27.480 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31401, failed: 0 00:36:27.480 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2700, failed to submit 28701 00:36:27.480 success 507, unsuccess 2193, failed 0 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.480 20:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4154958 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 4154958 ']' 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 4154958 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4154958 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4154958' 00:36:28.416 killing process with pid 4154958 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 4154958 00:36:28.416 20:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 4154958 00:36:28.674 00:36:28.674 real 0m14.375s 00:36:28.674 user 0m54.081s 00:36:28.674 sys 0m2.773s 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:28.674 ************************************ 00:36:28.674 END TEST spdk_target_abort 00:36:28.674 ************************************ 00:36:28.674 20:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:28.674 20:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:28.674 20:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:28.674 20:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:28.674 ************************************ 00:36:28.674 START TEST kernel_target_abort 00:36:28.674 
************************************ 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:28.674 20:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.630 Waiting for block devices as requested 00:36:29.889 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:29.889 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:29.889 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.147 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.147 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.147 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.406 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.406 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.406 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.406 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.406 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.664 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.664 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.664 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.922 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.922 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.922 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:31.180 No valid GPT data, bailing 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:31.180 20:06:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:31.180 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:31.180 00:36:31.180 Discovery Log Number of Records 2, Generation counter 2 00:36:31.180 =====Discovery Log Entry 0====== 00:36:31.180 trtype: tcp 00:36:31.180 adrfam: ipv4 00:36:31.180 subtype: current discovery subsystem 00:36:31.180 treq: not specified, sq flow control disable supported 00:36:31.180 portid: 1 00:36:31.180 trsvcid: 4420 00:36:31.180 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:31.180 traddr: 10.0.0.1 00:36:31.180 eflags: none 00:36:31.180 sectype: none 00:36:31.180 =====Discovery Log Entry 1====== 00:36:31.180 trtype: tcp 00:36:31.180 adrfam: ipv4 00:36:31.180 subtype: nvme subsystem 00:36:31.180 treq: not specified, sq flow control disable supported 00:36:31.180 portid: 1 00:36:31.180 trsvcid: 4420 00:36:31.181 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:31.181 traddr: 10.0.0.1 00:36:31.181 eflags: none 00:36:31.181 sectype: none 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.181 20:06:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:31.181 20:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.181 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.464 Initializing NVMe Controllers 00:36:34.464 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.464 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.464 Initialization complete. Launching workers. 00:36:34.464 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40221, failed: 0 00:36:34.464 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40221, failed to submit 0 00:36:34.464 success 0, unsuccess 40221, failed 0 00:36:34.464 20:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.464 20:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.464 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.750 Initializing NVMe Controllers 00:36:37.750 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.750 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.750 Initialization complete. Launching workers. 
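The kernel_target_abort case running here takes the opposite route from the SPDK target: the configfs writes traced a few lines back export /dev/nvme0n1 through the kernel nvmet/TCP stack on 10.0.0.1:4420, and the same abort loop is then pointed at that listener. Condensed into a standalone sketch, it looks like the block below; the NQN, device path and address come from the trace, while the attribute file names are filled in from the standard nvmet configfs layout because the xtrace output does not show the redirections.
  # Export /dev/nvme0n1 via the kernel NVMe/TCP target, as the trace above does.
  modprobe nvmet
  modprobe nvmet_tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  # The trace also writes "SPDK-nqn.2016-06.io.spdk:testnqn" to the subsystem's
  # model/serial attribute; the exact destination file is not visible in the xtrace.
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should report the two discovery log entries shown above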
00:36:37.750 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75311, failed: 0 00:36:37.750 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18994, failed to submit 56317 00:36:37.750 success 0, unsuccess 18994, failed 0 00:36:37.750 20:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.750 20:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.750 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.040 Initializing NVMe Controllers 00:36:41.040 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.040 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.040 Initialization complete. Launching workers. 00:36:41.040 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73388, failed: 0 00:36:41.040 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18310, failed to submit 55078 00:36:41.040 success 0, unsuccess 18310, failed 0 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:41.040 20:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:41.977 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:41.977 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:41.977 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:41.977 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:41.977 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:41.977 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:41.977 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:41.978 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:41.978 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:41.978 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:42.917 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:42.917 00:36:42.917 real 0m14.236s 00:36:42.917 user 0m5.961s 00:36:42.917 sys 0m3.205s 00:36:42.917 20:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:42.917 20:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:42.917 ************************************ 00:36:42.917 END TEST kernel_target_abort 00:36:42.917 ************************************ 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:42.917 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:42.917 rmmod nvme_tcp 00:36:42.917 rmmod nvme_fabrics 00:36:43.175 rmmod nvme_keyring 00:36:43.175 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:43.175 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 4154958 ']' 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 4154958 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 4154958 ']' 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 4154958 00:36:43.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4154958) - No such process 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 4154958 is not found' 00:36:43.176 Process with pid 4154958 is not found 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:43.176 20:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:44.113 Waiting for block devices as requested 00:36:44.113 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:44.371 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:44.371 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:44.371 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:44.628 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:44.628 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:44.628 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:44.628 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:44.886 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:44.886 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:44.886 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:44.886 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:45.143 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:45.143 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:45.143 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:45.400 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:45.400 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:45.400 20:06:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.933 20:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:47.933 00:36:47.933 real 0m37.851s 00:36:47.933 user 1m2.075s 00:36:47.933 sys 0m9.255s 00:36:47.933 20:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:47.933 20:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.933 ************************************ 00:36:47.933 END TEST nvmf_abort_qd_sizes 00:36:47.933 ************************************ 00:36:47.933 20:06:56 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:47.933 20:06:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:47.933 20:06:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:47.933 20:06:56 -- common/autotest_common.sh@10 -- # set +x 00:36:47.933 ************************************ 00:36:47.933 START TEST keyring_file 00:36:47.933 ************************************ 00:36:47.933 20:06:56 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:47.933 * Looking for test storage... 
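The teardown that closes nvmf_abort_qd_sizes above (nvmftestfini followed by setup.sh reset) is spread over many xtrace lines; condensed into a short shell sketch, using only commands that appear in the trace, it amounts to the following. The pid 4154958 and the interface name cvl_0_1 are the values from this particular run, and the retry loop and trap handling in nvmf/common.sh are omitted.

    sync
    modprobe -v -r nvme-tcp        # removing nvme_tcp also drops the now-unused nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics    # usually a no-op after the line above
    kill -0 4154958 && kill 4154958   # killprocess(): in this run the target had already exited, hence "not found"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset   # rebind devices from vfio-pci back to ioatdma/nvme
    ip -4 addr flush cvl_0_1       # remove the test address from the target-side interface
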
00:36:47.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.933 20:06:56 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.933 20:06:56 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.933 20:06:56 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.933 20:06:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.933 20:06:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.933 20:06:56 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.933 20:06:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:47.933 20:06:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:47.933 20:06:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gDyXfk9fZ4 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:47.933 20:06:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:47.933 20:06:56 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gDyXfk9fZ4 00:36:47.933 20:06:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gDyXfk9fZ4 00:36:47.934 20:06:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gDyXfk9fZ4 00:36:47.934 20:06:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MujdbXoaah 00:36:47.934 20:06:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:47.934 20:06:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:47.934 20:06:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.934 20:06:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.934 20:06:56 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:47.934 20:06:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:47.934 20:06:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:47.934 20:06:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MujdbXoaah 00:36:47.934 20:06:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MujdbXoaah 00:36:47.934 20:06:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MujdbXoaah 00:36:47.934 20:06:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=4160771 00:36:47.934 20:06:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:47.934 20:06:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4160771 00:36:47.934 20:06:57 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 4160771 ']' 00:36:47.934 20:06:57 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.934 20:06:57 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:47.934 20:06:57 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.934 20:06:57 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:47.934 20:06:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:47.934 [2024-07-25 20:06:57.054517] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
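For reference, the key preparation that prep_key just performed, together with the registration calls that follow a few steps further down, reduces to the sketch below. The python one-liner behind format_interchange_psk is elided in the trace, so it is only represented by a hedged comment rather than reconstructed; the key bytes, temp paths and RPC socket are the values used in this run.

    key=00112233445566778899aabbccddeeff
    keypath=$(mktemp)            # e.g. /tmp/tmp.gDyXfk9fZ4
    # format_interchange_psk "$key" 0 > "$keypath"   # roughly: emits the NVMeTLSkey-1 interchange
    #   form of $key; the python encoder it pipes through is not shown in the trace
    chmod 0600 "$keypath"        # required: the 0660 negative test later in this run is rejected
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keypath"
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # TLS PSK referenced by key name
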
00:36:47.934 [2024-07-25 20:06:57.054608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160771 ] 00:36:47.934 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.934 [2024-07-25 20:06:57.115974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.934 [2024-07-25 20:06:57.201162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:48.192 20:06:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.192 [2024-07-25 20:06:57.453254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.192 null0 00:36:48.192 [2024-07-25 20:06:57.485305] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:48.192 [2024-07-25 20:06:57.485801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:48.192 [2024-07-25 20:06:57.493324] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.192 20:06:57 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.192 [2024-07-25 20:06:57.505360] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:48.192 request: 00:36:48.192 { 00:36:48.192 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.192 "secure_channel": false, 00:36:48.192 "listen_address": { 00:36:48.192 "trtype": "tcp", 00:36:48.192 "traddr": "127.0.0.1", 00:36:48.192 "trsvcid": "4420" 00:36:48.192 }, 00:36:48.192 "method": "nvmf_subsystem_add_listener", 00:36:48.192 "req_id": 1 00:36:48.192 } 00:36:48.192 Got JSON-RPC error response 00:36:48.192 response: 00:36:48.192 { 00:36:48.192 "code": -32602, 00:36:48.192 "message": "Invalid parameters" 00:36:48.192 } 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:48.192 20:06:57 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:48.192 20:06:57 keyring_file -- keyring/file.sh@46 -- # bperfpid=4160775 00:36:48.192 20:06:57 keyring_file -- keyring/file.sh@48 -- # waitforlisten 4160775 /var/tmp/bperf.sock 00:36:48.192 20:06:57 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 4160775 ']' 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:48.192 20:06:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.192 [2024-07-25 20:06:57.553919] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:36:48.192 [2024-07-25 20:06:57.553984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160775 ] 00:36:48.192 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.192 [2024-07-25 20:06:57.611736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.449 [2024-07-25 20:06:57.700714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.449 20:06:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:48.449 20:06:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:48.449 20:06:57 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:48.449 20:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:48.707 20:06:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MujdbXoaah 00:36:48.707 20:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MujdbXoaah 00:36:48.964 20:06:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:48.964 20:06:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:48.964 20:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.964 20:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.964 20:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.222 20:06:58 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.gDyXfk9fZ4 == \/\t\m\p\/\t\m\p\.\g\D\y\X\f\k\9\f\Z\4 ]] 00:36:49.222 20:06:58 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:49.222 20:06:58 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:49.222 20:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.222 20:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:49.222 20:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.479 20:06:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MujdbXoaah == \/\t\m\p\/\t\m\p\.\M\u\j\d\b\X\o\a\a\h ]] 00:36:49.479 20:06:58 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:49.479 20:06:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.479 20:06:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.479 20:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.479 20:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.479 20:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.766 20:06:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:49.766 20:06:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:49.766 20:06:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:49.766 20:06:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.766 20:06:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.766 20:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.766 20:06:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.024 20:06:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:50.024 20:06:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.024 20:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.282 [2024-07-25 20:06:59.539153] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:50.282 nvme0n1 00:36:50.282 20:06:59 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:50.282 20:06:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.282 20:06:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.282 20:06:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.282 20:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.282 20:06:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.540 20:06:59 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:50.540 20:06:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:50.540 20:06:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.540 20:06:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.540 20:06:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.540 
20:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.540 20:06:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.798 20:07:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:50.798 20:07:00 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:51.057 Running I/O for 1 seconds... 00:36:51.995 00:36:51.995 Latency(us) 00:36:51.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.995 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:51.995 nvme0n1 : 1.02 6949.06 27.14 0.00 0.00 18258.48 9126.49 27379.48 00:36:51.995 =================================================================================================================== 00:36:51.995 Total : 6949.06 27.14 0.00 0.00 18258.48 9126.49 27379.48 00:36:51.995 0 00:36:51.995 20:07:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:51.995 20:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:52.253 20:07:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:52.253 20:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.253 20:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.253 20:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.253 20:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.253 20:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.511 20:07:01 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:52.511 20:07:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:52.511 20:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:52.511 20:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.511 20:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.511 20:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.511 20:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.769 20:07:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:52.769 20:07:02 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.769 20:07:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:52.769 20:07:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.769 20:07:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:52.769 20:07:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.769 20:07:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:52.769 20:07:02 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.769 20:07:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.769 20:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.027 [2024-07-25 20:07:02.257309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:53.027 [2024-07-25 20:07:02.257885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2444730 (107): Transport endpoint is not connected 00:36:53.027 [2024-07-25 20:07:02.258873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2444730 (9): Bad file descriptor 00:36:53.027 [2024-07-25 20:07:02.259871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:53.027 [2024-07-25 20:07:02.259893] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:53.027 [2024-07-25 20:07:02.259908] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:53.027 request: 00:36:53.027 { 00:36:53.027 "name": "nvme0", 00:36:53.027 "trtype": "tcp", 00:36:53.027 "traddr": "127.0.0.1", 00:36:53.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.027 "adrfam": "ipv4", 00:36:53.027 "trsvcid": "4420", 00:36:53.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.027 "psk": "key1", 00:36:53.027 "method": "bdev_nvme_attach_controller", 00:36:53.027 "req_id": 1 00:36:53.027 } 00:36:53.027 Got JSON-RPC error response 00:36:53.027 response: 00:36:53.027 { 00:36:53.027 "code": -5, 00:36:53.027 "message": "Input/output error" 00:36:53.027 } 00:36:53.027 20:07:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:53.027 20:07:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:53.027 20:07:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:53.027 20:07:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:53.027 20:07:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:53.027 20:07:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.028 20:07:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.028 20:07:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.028 20:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.028 20:07:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.285 20:07:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:53.285 20:07:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:53.285 20:07:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.285 20:07:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.285 20:07:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.285 20:07:02 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.285 20:07:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.543 20:07:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:53.543 20:07:02 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:53.543 20:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:53.801 20:07:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:53.801 20:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:54.060 20:07:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:54.060 20:07:03 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:54.060 20:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.317 20:07:03 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:54.317 20:07:03 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.gDyXfk9fZ4 00:36:54.317 20:07:03 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.317 20:07:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:54.317 20:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:54.574 [2024-07-25 20:07:03.757930] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gDyXfk9fZ4': 0100660 00:36:54.574 [2024-07-25 20:07:03.757973] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:54.574 request: 00:36:54.574 { 00:36:54.574 "name": "key0", 00:36:54.574 "path": "/tmp/tmp.gDyXfk9fZ4", 00:36:54.574 "method": "keyring_file_add_key", 00:36:54.574 "req_id": 1 00:36:54.574 } 00:36:54.574 Got JSON-RPC error response 00:36:54.574 response: 00:36:54.574 { 00:36:54.574 "code": -1, 00:36:54.574 "message": "Operation not permitted" 00:36:54.574 } 00:36:54.574 20:07:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:54.574 20:07:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:54.574 20:07:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:54.574 20:07:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:54.574 20:07:03 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.gDyXfk9fZ4 00:36:54.574 20:07:03 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:54.574 20:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gDyXfk9fZ4 00:36:54.832 20:07:04 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.gDyXfk9fZ4 00:36:54.832 20:07:04 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:54.832 20:07:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:54.832 20:07:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.832 20:07:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.832 20:07:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.832 20:07:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.090 20:07:04 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:55.090 20:07:04 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.090 20:07:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.091 20:07:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.091 [2024-07-25 20:07:04.487920] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gDyXfk9fZ4': No such file or directory 00:36:55.091 [2024-07-25 20:07:04.487957] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:55.091 [2024-07-25 20:07:04.487989] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:55.091 [2024-07-25 20:07:04.488002] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:55.091 [2024-07-25 20:07:04.488015] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:55.091 request: 00:36:55.091 { 00:36:55.091 "name": "nvme0", 00:36:55.091 "trtype": "tcp", 00:36:55.091 "traddr": "127.0.0.1", 00:36:55.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.091 "adrfam": "ipv4", 00:36:55.091 "trsvcid": "4420", 00:36:55.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.091 "psk": "key0", 00:36:55.091 "method": "bdev_nvme_attach_controller", 
00:36:55.091 "req_id": 1 00:36:55.091 } 00:36:55.091 Got JSON-RPC error response 00:36:55.091 response: 00:36:55.091 { 00:36:55.091 "code": -19, 00:36:55.091 "message": "No such device" 00:36:55.091 } 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.091 20:07:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.091 20:07:04 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:55.091 20:07:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:55.348 20:07:04 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EKo8bNfNXh 00:36:55.348 20:07:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:55.348 20:07:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:55.348 20:07:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:55.348 20:07:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:55.348 20:07:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:55.348 20:07:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:55.348 20:07:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:55.608 20:07:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EKo8bNfNXh 00:36:55.608 20:07:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EKo8bNfNXh 00:36:55.608 20:07:04 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.EKo8bNfNXh 00:36:55.608 20:07:04 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EKo8bNfNXh 00:36:55.608 20:07:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EKo8bNfNXh 00:36:55.868 20:07:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.868 20:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.127 nvme0n1 00:36:56.127 20:07:05 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:56.127 20:07:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.127 20:07:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.127 20:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.127 20:07:05 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.127 20:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.384 20:07:05 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:56.384 20:07:05 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:56.385 20:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:56.642 20:07:05 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:56.642 20:07:05 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:56.643 20:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.643 20:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.643 20:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.900 20:07:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:56.900 20:07:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:56.900 20:07:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.900 20:07:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.900 20:07:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.900 20:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.900 20:07:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.158 20:07:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:57.158 20:07:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:57.158 20:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:57.416 20:07:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:57.416 20:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.416 20:07:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:57.674 20:07:06 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:57.674 20:07:06 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EKo8bNfNXh 00:36:57.674 20:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EKo8bNfNXh 00:36:57.934 20:07:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MujdbXoaah 00:36:57.934 20:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MujdbXoaah 00:36:58.194 20:07:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.194 20:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.452 nvme0n1 00:36:58.452 20:07:07 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:58.452 20:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:58.710 20:07:07 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:58.710 "subsystems": [ 00:36:58.710 { 00:36:58.710 "subsystem": "keyring", 00:36:58.710 "config": [ 00:36:58.710 { 00:36:58.710 "method": "keyring_file_add_key", 00:36:58.710 "params": { 00:36:58.710 "name": "key0", 00:36:58.710 "path": "/tmp/tmp.EKo8bNfNXh" 00:36:58.710 } 00:36:58.710 }, 00:36:58.710 { 00:36:58.710 "method": "keyring_file_add_key", 00:36:58.710 "params": { 00:36:58.710 "name": "key1", 00:36:58.710 "path": "/tmp/tmp.MujdbXoaah" 00:36:58.710 } 00:36:58.710 } 00:36:58.710 ] 00:36:58.710 }, 00:36:58.710 { 00:36:58.710 "subsystem": "iobuf", 00:36:58.710 "config": [ 00:36:58.710 { 00:36:58.710 "method": "iobuf_set_options", 00:36:58.710 "params": { 00:36:58.710 "small_pool_count": 8192, 00:36:58.710 "large_pool_count": 1024, 00:36:58.710 "small_bufsize": 8192, 00:36:58.710 "large_bufsize": 135168 00:36:58.710 } 00:36:58.710 } 00:36:58.710 ] 00:36:58.710 }, 00:36:58.710 { 00:36:58.710 "subsystem": "sock", 00:36:58.710 "config": [ 00:36:58.710 { 00:36:58.710 "method": "sock_set_default_impl", 00:36:58.710 "params": { 00:36:58.710 "impl_name": "posix" 00:36:58.710 } 00:36:58.710 }, 00:36:58.710 { 00:36:58.710 "method": "sock_impl_set_options", 00:36:58.710 "params": { 00:36:58.711 "impl_name": "ssl", 00:36:58.711 "recv_buf_size": 4096, 00:36:58.711 "send_buf_size": 4096, 00:36:58.711 "enable_recv_pipe": true, 00:36:58.711 "enable_quickack": false, 00:36:58.711 "enable_placement_id": 0, 00:36:58.711 "enable_zerocopy_send_server": true, 00:36:58.711 "enable_zerocopy_send_client": false, 00:36:58.711 "zerocopy_threshold": 0, 00:36:58.711 "tls_version": 0, 00:36:58.711 "enable_ktls": false 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "sock_impl_set_options", 00:36:58.711 "params": { 00:36:58.711 "impl_name": "posix", 00:36:58.711 "recv_buf_size": 2097152, 00:36:58.711 "send_buf_size": 2097152, 00:36:58.711 "enable_recv_pipe": true, 00:36:58.711 "enable_quickack": false, 00:36:58.711 "enable_placement_id": 0, 00:36:58.711 "enable_zerocopy_send_server": true, 00:36:58.711 "enable_zerocopy_send_client": false, 00:36:58.711 "zerocopy_threshold": 0, 00:36:58.711 "tls_version": 0, 00:36:58.711 "enable_ktls": false 00:36:58.711 } 00:36:58.711 } 00:36:58.711 ] 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "subsystem": "vmd", 00:36:58.711 "config": [] 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "subsystem": "accel", 00:36:58.711 "config": [ 00:36:58.711 { 00:36:58.711 "method": "accel_set_options", 00:36:58.711 "params": { 00:36:58.711 "small_cache_size": 128, 00:36:58.711 "large_cache_size": 16, 00:36:58.711 "task_count": 2048, 00:36:58.711 "sequence_count": 2048, 00:36:58.711 "buf_count": 2048 00:36:58.711 } 00:36:58.711 } 00:36:58.711 ] 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "subsystem": "bdev", 00:36:58.711 "config": [ 00:36:58.711 { 00:36:58.711 "method": "bdev_set_options", 00:36:58.711 "params": { 00:36:58.711 "bdev_io_pool_size": 65535, 00:36:58.711 "bdev_io_cache_size": 256, 00:36:58.711 "bdev_auto_examine": true, 00:36:58.711 "iobuf_small_cache_size": 128, 
00:36:58.711 "iobuf_large_cache_size": 16 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "bdev_raid_set_options", 00:36:58.711 "params": { 00:36:58.711 "process_window_size_kb": 1024 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "bdev_iscsi_set_options", 00:36:58.711 "params": { 00:36:58.711 "timeout_sec": 30 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "bdev_nvme_set_options", 00:36:58.711 "params": { 00:36:58.711 "action_on_timeout": "none", 00:36:58.711 "timeout_us": 0, 00:36:58.711 "timeout_admin_us": 0, 00:36:58.711 "keep_alive_timeout_ms": 10000, 00:36:58.711 "arbitration_burst": 0, 00:36:58.711 "low_priority_weight": 0, 00:36:58.711 "medium_priority_weight": 0, 00:36:58.711 "high_priority_weight": 0, 00:36:58.711 "nvme_adminq_poll_period_us": 10000, 00:36:58.711 "nvme_ioq_poll_period_us": 0, 00:36:58.711 "io_queue_requests": 512, 00:36:58.711 "delay_cmd_submit": true, 00:36:58.711 "transport_retry_count": 4, 00:36:58.711 "bdev_retry_count": 3, 00:36:58.711 "transport_ack_timeout": 0, 00:36:58.711 "ctrlr_loss_timeout_sec": 0, 00:36:58.711 "reconnect_delay_sec": 0, 00:36:58.711 "fast_io_fail_timeout_sec": 0, 00:36:58.711 "disable_auto_failback": false, 00:36:58.711 "generate_uuids": false, 00:36:58.711 "transport_tos": 0, 00:36:58.711 "nvme_error_stat": false, 00:36:58.711 "rdma_srq_size": 0, 00:36:58.711 "io_path_stat": false, 00:36:58.711 "allow_accel_sequence": false, 00:36:58.711 "rdma_max_cq_size": 0, 00:36:58.711 "rdma_cm_event_timeout_ms": 0, 00:36:58.711 "dhchap_digests": [ 00:36:58.711 "sha256", 00:36:58.711 "sha384", 00:36:58.711 "sha512" 00:36:58.711 ], 00:36:58.711 "dhchap_dhgroups": [ 00:36:58.711 "null", 00:36:58.711 "ffdhe2048", 00:36:58.711 "ffdhe3072", 00:36:58.711 "ffdhe4096", 00:36:58.711 "ffdhe6144", 00:36:58.711 "ffdhe8192" 00:36:58.711 ] 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "bdev_nvme_attach_controller", 00:36:58.711 "params": { 00:36:58.711 "name": "nvme0", 00:36:58.711 "trtype": "TCP", 00:36:58.711 "adrfam": "IPv4", 00:36:58.711 "traddr": "127.0.0.1", 00:36:58.711 "trsvcid": "4420", 00:36:58.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.711 "prchk_reftag": false, 00:36:58.711 "prchk_guard": false, 00:36:58.711 "ctrlr_loss_timeout_sec": 0, 00:36:58.711 "reconnect_delay_sec": 0, 00:36:58.711 "fast_io_fail_timeout_sec": 0, 00:36:58.711 "psk": "key0", 00:36:58.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.711 "hdgst": false, 00:36:58.711 "ddgst": false 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "bdev_nvme_set_hotplug", 00:36:58.711 "params": { 00:36:58.711 "period_us": 100000, 00:36:58.711 "enable": false 00:36:58.711 } 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "method": "bdev_wait_for_examine" 00:36:58.711 } 00:36:58.711 ] 00:36:58.711 }, 00:36:58.711 { 00:36:58.711 "subsystem": "nbd", 00:36:58.711 "config": [] 00:36:58.711 } 00:36:58.711 ] 00:36:58.711 }' 00:36:58.711 20:07:07 keyring_file -- keyring/file.sh@114 -- # killprocess 4160775 00:36:58.711 20:07:07 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 4160775 ']' 00:36:58.711 20:07:07 keyring_file -- common/autotest_common.sh@950 -- # kill -0 4160775 00:36:58.711 20:07:07 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:58.711 20:07:07 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:58.711 20:07:07 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4160775 00:36:58.711 20:07:08 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:58.711 20:07:08 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:58.711 20:07:08 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4160775' 00:36:58.711 killing process with pid 4160775 00:36:58.711 20:07:08 keyring_file -- common/autotest_common.sh@965 -- # kill 4160775 00:36:58.711 Received shutdown signal, test time was about 1.000000 seconds 00:36:58.711 00:36:58.711 Latency(us) 00:36:58.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.711 =================================================================================================================== 00:36:58.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:58.711 20:07:08 keyring_file -- common/autotest_common.sh@970 -- # wait 4160775 00:36:58.970 20:07:08 keyring_file -- keyring/file.sh@117 -- # bperfpid=4162234 00:36:58.970 20:07:08 keyring_file -- keyring/file.sh@119 -- # waitforlisten 4162234 /var/tmp/bperf.sock 00:36:58.970 20:07:08 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 4162234 ']' 00:36:58.970 20:07:08 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:58.970 20:07:08 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:58.970 20:07:08 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:58.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:58.970 20:07:08 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:58.970 20:07:08 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:58.970 20:07:08 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:58.970 "subsystems": [ 00:36:58.970 { 00:36:58.970 "subsystem": "keyring", 00:36:58.970 "config": [ 00:36:58.970 { 00:36:58.970 "method": "keyring_file_add_key", 00:36:58.970 "params": { 00:36:58.970 "name": "key0", 00:36:58.970 "path": "/tmp/tmp.EKo8bNfNXh" 00:36:58.970 } 00:36:58.970 }, 00:36:58.970 { 00:36:58.970 "method": "keyring_file_add_key", 00:36:58.970 "params": { 00:36:58.970 "name": "key1", 00:36:58.970 "path": "/tmp/tmp.MujdbXoaah" 00:36:58.970 } 00:36:58.970 } 00:36:58.970 ] 00:36:58.970 }, 00:36:58.970 { 00:36:58.970 "subsystem": "iobuf", 00:36:58.970 "config": [ 00:36:58.970 { 00:36:58.970 "method": "iobuf_set_options", 00:36:58.970 "params": { 00:36:58.970 "small_pool_count": 8192, 00:36:58.970 "large_pool_count": 1024, 00:36:58.970 "small_bufsize": 8192, 00:36:58.970 "large_bufsize": 135168 00:36:58.970 } 00:36:58.970 } 00:36:58.970 ] 00:36:58.970 }, 00:36:58.970 { 00:36:58.970 "subsystem": "sock", 00:36:58.970 "config": [ 00:36:58.970 { 00:36:58.970 "method": "sock_set_default_impl", 00:36:58.970 "params": { 00:36:58.970 "impl_name": "posix" 00:36:58.970 } 00:36:58.970 }, 00:36:58.970 { 00:36:58.970 "method": "sock_impl_set_options", 00:36:58.970 "params": { 00:36:58.970 "impl_name": "ssl", 00:36:58.970 "recv_buf_size": 4096, 00:36:58.970 "send_buf_size": 4096, 00:36:58.970 "enable_recv_pipe": true, 00:36:58.970 "enable_quickack": false, 00:36:58.970 "enable_placement_id": 0, 00:36:58.970 "enable_zerocopy_send_server": true, 00:36:58.970 "enable_zerocopy_send_client": false, 00:36:58.970 
"zerocopy_threshold": 0, 00:36:58.970 "tls_version": 0, 00:36:58.970 "enable_ktls": false 00:36:58.970 } 00:36:58.970 }, 00:36:58.970 { 00:36:58.970 "method": "sock_impl_set_options", 00:36:58.970 "params": { 00:36:58.970 "impl_name": "posix", 00:36:58.970 "recv_buf_size": 2097152, 00:36:58.970 "send_buf_size": 2097152, 00:36:58.970 "enable_recv_pipe": true, 00:36:58.970 "enable_quickack": false, 00:36:58.970 "enable_placement_id": 0, 00:36:58.970 "enable_zerocopy_send_server": true, 00:36:58.970 "enable_zerocopy_send_client": false, 00:36:58.971 "zerocopy_threshold": 0, 00:36:58.971 "tls_version": 0, 00:36:58.971 "enable_ktls": false 00:36:58.971 } 00:36:58.971 } 00:36:58.971 ] 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "subsystem": "vmd", 00:36:58.971 "config": [] 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "subsystem": "accel", 00:36:58.971 "config": [ 00:36:58.971 { 00:36:58.971 "method": "accel_set_options", 00:36:58.971 "params": { 00:36:58.971 "small_cache_size": 128, 00:36:58.971 "large_cache_size": 16, 00:36:58.971 "task_count": 2048, 00:36:58.971 "sequence_count": 2048, 00:36:58.971 "buf_count": 2048 00:36:58.971 } 00:36:58.971 } 00:36:58.971 ] 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "subsystem": "bdev", 00:36:58.971 "config": [ 00:36:58.971 { 00:36:58.971 "method": "bdev_set_options", 00:36:58.971 "params": { 00:36:58.971 "bdev_io_pool_size": 65535, 00:36:58.971 "bdev_io_cache_size": 256, 00:36:58.971 "bdev_auto_examine": true, 00:36:58.971 "iobuf_small_cache_size": 128, 00:36:58.971 "iobuf_large_cache_size": 16 00:36:58.971 } 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "method": "bdev_raid_set_options", 00:36:58.971 "params": { 00:36:58.971 "process_window_size_kb": 1024 00:36:58.971 } 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "method": "bdev_iscsi_set_options", 00:36:58.971 "params": { 00:36:58.971 "timeout_sec": 30 00:36:58.971 } 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "method": "bdev_nvme_set_options", 00:36:58.971 "params": { 00:36:58.971 "action_on_timeout": "none", 00:36:58.971 "timeout_us": 0, 00:36:58.971 "timeout_admin_us": 0, 00:36:58.971 "keep_alive_timeout_ms": 10000, 00:36:58.971 "arbitration_burst": 0, 00:36:58.971 "low_priority_weight": 0, 00:36:58.971 "medium_priority_weight": 0, 00:36:58.971 "high_priority_weight": 0, 00:36:58.971 "nvme_adminq_poll_period_us": 10000, 00:36:58.971 "nvme_ioq_poll_period_us": 0, 00:36:58.971 "io_queue_requests": 512, 00:36:58.971 "delay_cmd_submit": true, 00:36:58.971 "transport_retry_count": 4, 00:36:58.971 "bdev_retry_count": 3, 00:36:58.971 "transport_ack_timeout": 0, 00:36:58.971 "ctrlr_loss_timeout_sec": 0, 00:36:58.971 "reconnect_delay_sec": 0, 00:36:58.971 "fast_io_fail_timeout_sec": 0, 00:36:58.971 "disable_auto_failback": false, 00:36:58.971 "generate_uuids": false, 00:36:58.971 "transport_tos": 0, 00:36:58.971 "nvme_error_stat": false, 00:36:58.971 "rdma_srq_size": 0, 00:36:58.971 "io_path_stat": false, 00:36:58.971 "allow_accel_sequence": false, 00:36:58.971 "rdma_max_cq_size": 0, 00:36:58.971 "rdma_cm_event_timeout_ms": 0, 00:36:58.971 "dhchap_digests": [ 00:36:58.971 "sha256", 00:36:58.971 "sha384", 00:36:58.971 "sha512" 00:36:58.971 ], 00:36:58.971 "dhchap_dhgroups": [ 00:36:58.971 "null", 00:36:58.971 "ffdhe2048", 00:36:58.971 "ffdhe3072", 00:36:58.971 "ffdhe4096", 00:36:58.971 "ffdhe6144", 00:36:58.971 "ffdhe8192" 00:36:58.971 ] 00:36:58.971 } 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "method": "bdev_nvme_attach_controller", 00:36:58.971 "params": { 00:36:58.971 "name": "nvme0", 00:36:58.971 "trtype": 
"TCP", 00:36:58.971 "adrfam": "IPv4", 00:36:58.971 "traddr": "127.0.0.1", 00:36:58.971 "trsvcid": "4420", 00:36:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.971 "prchk_reftag": false, 00:36:58.971 "prchk_guard": false, 00:36:58.971 "ctrlr_loss_timeout_sec": 0, 00:36:58.971 "reconnect_delay_sec": 0, 00:36:58.971 "fast_io_fail_timeout_sec": 0, 00:36:58.971 "psk": "key0", 00:36:58.971 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.971 "hdgst": false, 00:36:58.971 "ddgst": false 00:36:58.971 } 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "method": "bdev_nvme_set_hotplug", 00:36:58.971 "params": { 00:36:58.971 "period_us": 100000, 00:36:58.971 "enable": false 00:36:58.971 } 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "method": "bdev_wait_for_examine" 00:36:58.971 } 00:36:58.971 ] 00:36:58.971 }, 00:36:58.971 { 00:36:58.971 "subsystem": "nbd", 00:36:58.971 "config": [] 00:36:58.971 } 00:36:58.971 ] 00:36:58.971 }' 00:36:58.971 20:07:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:58.971 [2024-07-25 20:07:08.256678] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:36:58.971 [2024-07-25 20:07:08.256771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162234 ] 00:36:58.971 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.971 [2024-07-25 20:07:08.315578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.229 [2024-07-25 20:07:08.402557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.229 [2024-07-25 20:07:08.583397] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:59.795 20:07:09 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:59.795 20:07:09 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:59.795 20:07:09 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:59.795 20:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.795 20:07:09 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:00.052 20:07:09 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:00.052 20:07:09 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:00.052 20:07:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:00.052 20:07:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.052 20:07:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.052 20:07:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.052 20:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.308 20:07:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:00.308 20:07:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:00.308 20:07:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:00.308 20:07:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.308 20:07:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.308 20:07:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:00.308 20:07:09 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.565 20:07:09 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:00.565 20:07:09 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:00.565 20:07:09 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:00.565 20:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:00.822 20:07:10 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:00.822 20:07:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:00.822 20:07:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.EKo8bNfNXh /tmp/tmp.MujdbXoaah 00:37:00.822 20:07:10 keyring_file -- keyring/file.sh@20 -- # killprocess 4162234 00:37:00.822 20:07:10 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 4162234 ']' 00:37:00.822 20:07:10 keyring_file -- common/autotest_common.sh@950 -- # kill -0 4162234 00:37:00.822 20:07:10 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:00.822 20:07:10 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:00.822 20:07:10 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4162234 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4162234' 00:37:01.079 killing process with pid 4162234 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@965 -- # kill 4162234 00:37:01.079 Received shutdown signal, test time was about 1.000000 seconds 00:37:01.079 00:37:01.079 Latency(us) 00:37:01.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.079 =================================================================================================================== 00:37:01.079 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@970 -- # wait 4162234 00:37:01.079 20:07:10 keyring_file -- keyring/file.sh@21 -- # killprocess 4160771 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 4160771 ']' 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@950 -- # kill -0 4160771 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:01.079 20:07:10 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4160771 00:37:01.080 20:07:10 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:01.080 20:07:10 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:01.080 20:07:10 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4160771' 00:37:01.080 killing process with pid 4160771 00:37:01.080 20:07:10 keyring_file -- common/autotest_common.sh@965 -- # kill 4160771 00:37:01.080 [2024-07-25 20:07:10.498908] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:01.080 20:07:10 keyring_file -- common/autotest_common.sh@970 -- # wait 4160771 00:37:01.646 00:37:01.646 real 0m14.026s 
00:37:01.646 user 0m35.070s 00:37:01.646 sys 0m3.265s 00:37:01.646 20:07:10 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:01.646 20:07:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:01.646 ************************************ 00:37:01.646 END TEST keyring_file 00:37:01.646 ************************************ 00:37:01.646 20:07:10 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:01.646 20:07:10 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:01.646 20:07:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:01.646 20:07:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:01.646 20:07:10 -- common/autotest_common.sh@10 -- # set +x 00:37:01.646 ************************************ 00:37:01.646 START TEST keyring_linux 00:37:01.646 ************************************ 00:37:01.646 20:07:10 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:01.646 * Looking for test storage... 00:37:01.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.646 20:07:10 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.646 20:07:10 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.646 20:07:10 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.646 20:07:10 keyring_linux -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.646 20:07:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.646 20:07:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.646 20:07:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:01.646 20:07:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:01.646 20:07:10 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:01.646 20:07:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:01.646 20:07:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:01.647 20:07:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:01.647 20:07:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:01.647 20:07:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:01.647 20:07:10 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:01.647 20:07:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:01.647 20:07:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:01.647 /tmp/:spdk-test:key0 00:37:01.647 20:07:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:01.647 20:07:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:01.647 20:07:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:01.647 20:07:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:01.647 20:07:11 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:01.647 20:07:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:01.647 20:07:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:01.647 20:07:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:01.647 /tmp/:spdk-test:key1 00:37:01.647 20:07:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4162596 00:37:01.647 20:07:11 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:01.647 20:07:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4162596 00:37:01.647 20:07:11 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 4162596 ']' 00:37:01.647 20:07:11 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.647 20:07:11 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:01.647 20:07:11 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
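The prep_key trace above converts each raw hex key into an NVMe/TCP interchange PSK (NVMeTLSkey-1:00:...) and stores it in a 0600-protected file under /tmp. A minimal manual equivalent, a sketch only: the two interchange strings are copied verbatim from the keyctl add lines just below and are specific to this run, and the exact file contents written by prep_key (e.g. trailing newline) are assumed.

    key0='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # copied from the keyctl add trace below
    key1='NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:'
    printf '%s' "$key0" > /tmp/:spdk-test:key0
    printf '%s' "$key1" > /tmp/:spdk-test:key1
    chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1                        # matches the chmod 0600 calls in the trace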
00:37:01.647 20:07:11 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:01.647 20:07:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.906 [2024-07-25 20:07:11.105796] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:37:01.906 [2024-07-25 20:07:11.105873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162596 ] 00:37:01.906 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.906 [2024-07-25 20:07:11.167167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.906 [2024-07-25 20:07:11.256559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:02.165 20:07:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:02.165 [2024-07-25 20:07:11.517184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.165 null0 00:37:02.165 [2024-07-25 20:07:11.549200] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:02.165 [2024-07-25 20:07:11.549704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.165 20:07:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:02.165 81816613 00:37:02.165 20:07:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:02.165 590847108 00:37:02.165 20:07:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4162698 00:37:02.165 20:07:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4162698 /var/tmp/bperf.sock 00:37:02.165 20:07:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 4162698 ']' 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:02.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:02.165 20:07:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:02.423 [2024-07-25 20:07:11.616648] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:37:02.423 [2024-07-25 20:07:11.616726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162698 ] 00:37:02.423 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.423 [2024-07-25 20:07:11.680112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.423 [2024-07-25 20:07:11.771495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.423 20:07:11 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:02.423 20:07:11 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:02.423 20:07:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:02.423 20:07:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:02.680 20:07:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:02.680 20:07:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:03.245 20:07:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:03.245 20:07:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:03.245 [2024-07-25 20:07:12.616009] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:03.502 nvme0n1 00:37:03.502 20:07:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:03.502 20:07:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:03.502 20:07:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:03.502 20:07:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:03.502 20:07:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:03.502 20:07:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.759 20:07:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:03.759 20:07:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:03.759 20:07:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:03.759 20:07:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:03.759 20:07:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.759 20:07:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.759 20:07:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@25 -- # sn=81816613 00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
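With both keys registered in the session keyring (keyctl add ... @s above) and the controller attached with --psk :spdk-test:key0, check_keys cross-checks SPDK's view of the key against the kernel's. A hedged sketch of that verification, using only commands that appear in this trace; the serial value is specific to this run.

    sn=$(keyctl search @s user :spdk-test:key0)         # kernel-side serial; 81816613 in this run
    keyctl print "$sn"                                   # prints the NVMeTLSkey-1:00:... string registered above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == ":spdk-test:key0")'  # SPDK-side view; its .sn field should equal $sn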
00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 81816613 == \8\1\8\1\6\6\1\3 ]] 00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 81816613 00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:04.018 20:07:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:04.018 Running I/O for 1 seconds... 00:37:04.950 00:37:04.950 Latency(us) 00:37:04.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.950 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:04.950 nvme0n1 : 1.01 7382.92 28.84 0.00 0.00 17191.07 4369.07 22816.24 00:37:04.950 =================================================================================================================== 00:37:04.950 Total : 7382.92 28.84 0.00 0.00 17191.07 4369.07 22816.24 00:37:04.950 0 00:37:04.950 20:07:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:04.950 20:07:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:05.207 20:07:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:05.207 20:07:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:05.207 20:07:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:05.207 20:07:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:05.207 20:07:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.207 20:07:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:05.464 20:07:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:05.464 20:07:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:05.464 20:07:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:05.464 20:07:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.464 20:07:14 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:05.464 20:07:14 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.464 20:07:14 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:05.464 20:07:14 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.464 20:07:14 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:05.465 20:07:14 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.465 20:07:14 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.465 20:07:14 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.722 [2024-07-25 20:07:15.075616] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:05.722 [2024-07-25 20:07:15.075729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b14ea0 (107): Transport endpoint is not connected 00:37:05.722 [2024-07-25 20:07:15.076722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b14ea0 (9): Bad file descriptor 00:37:05.722 [2024-07-25 20:07:15.077720] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:05.722 [2024-07-25 20:07:15.077739] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:05.722 [2024-07-25 20:07:15.077767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:05.722 request: 00:37:05.722 { 00:37:05.722 "name": "nvme0", 00:37:05.722 "trtype": "tcp", 00:37:05.722 "traddr": "127.0.0.1", 00:37:05.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.722 "adrfam": "ipv4", 00:37:05.722 "trsvcid": "4420", 00:37:05.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.722 "psk": ":spdk-test:key1", 00:37:05.722 "method": "bdev_nvme_attach_controller", 00:37:05.722 "req_id": 1 00:37:05.722 } 00:37:05.722 Got JSON-RPC error response 00:37:05.722 response: 00:37:05.722 { 00:37:05.722 "code": -5, 00:37:05.722 "message": "Input/output error" 00:37:05.722 } 00:37:05.722 20:07:15 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:05.722 20:07:15 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@33 -- # sn=81816613 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 81816613 00:37:05.723 1 links removed 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@33 -- # sn=590847108 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 590847108 00:37:05.723 1 links removed 00:37:05.723 20:07:15 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 4162698 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 4162698 ']' 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 4162698 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4162698 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4162698' 00:37:05.723 killing process with pid 4162698 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@965 -- # kill 4162698 00:37:05.723 Received shutdown signal, test time was about 1.000000 seconds 00:37:05.723 00:37:05.723 Latency(us) 00:37:05.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.723 =================================================================================================================== 00:37:05.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:05.723 20:07:15 keyring_linux -- common/autotest_common.sh@970 -- # wait 4162698 00:37:05.980 20:07:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4162596 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 4162596 ']' 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 4162596 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4162596 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4162596' 00:37:05.980 killing process with pid 4162596 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@965 -- # kill 4162596 00:37:05.980 20:07:15 keyring_linux -- common/autotest_common.sh@970 -- # wait 4162596 00:37:06.579 00:37:06.579 real 0m4.852s 00:37:06.579 user 0m9.231s 00:37:06.579 sys 0m1.615s 00:37:06.579 20:07:15 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:06.579 20:07:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:06.579 ************************************ 00:37:06.579 END TEST keyring_linux 00:37:06.579 ************************************ 00:37:06.579 20:07:15 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
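The cleanup traced above removes both test keys from the session keyring before killing bdevperf (pid 4162698) and spdk_tgt (pid 4162596). A minimal sketch of the key removal, mirroring the keyctl calls in the log; the serial numbers are run-specific.

    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name")    # resolves to 81816613 / 590847108 in this run
        keyctl unlink "$sn"                    # the log reports "1 links removed" for each key
    done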
00:37:06.579 20:07:15 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:06.579 20:07:15 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:06.579 20:07:15 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:06.579 20:07:15 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:06.579 20:07:15 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:06.579 20:07:15 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:06.579 20:07:15 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:06.579 20:07:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:06.579 20:07:15 -- common/autotest_common.sh@10 -- # set +x 00:37:06.579 20:07:15 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:06.579 20:07:15 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:06.579 20:07:15 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:06.579 20:07:15 -- common/autotest_common.sh@10 -- # set +x 00:37:08.478 INFO: APP EXITING 00:37:08.478 INFO: killing all VMs 00:37:08.478 INFO: killing vhost app 00:37:08.478 INFO: EXIT DONE 00:37:09.414 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:09.414 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:09.414 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:09.414 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:09.414 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:09.414 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:09.414 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:09.414 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:09.414 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:09.414 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:09.414 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:09.414 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:09.414 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:09.414 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:09.414 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:09.672 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:09.672 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:10.605 Cleaning 00:37:10.605 Removing: /var/run/dpdk/spdk0/config 00:37:10.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:10.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:10.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:10.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:10.864 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:10.864 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:10.864 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:10.864 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:10.864 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:10.864 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:10.864 Removing: /var/run/dpdk/spdk1/config 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:10.864 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:10.864 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:10.864 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:10.864 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:10.864 Removing: /var/run/dpdk/spdk2/config 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:10.864 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:10.864 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:10.864 Removing: /var/run/dpdk/spdk3/config 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:10.864 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:10.864 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:10.864 Removing: /var/run/dpdk/spdk4/config 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:10.864 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:10.864 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:10.864 Removing: /dev/shm/bdev_svc_trace.1 00:37:10.864 Removing: /dev/shm/nvmf_trace.0 00:37:10.864 Removing: /dev/shm/spdk_tgt_trace.pid3842909 00:37:10.864 Removing: /var/run/dpdk/spdk0 00:37:10.864 Removing: /var/run/dpdk/spdk1 00:37:10.864 Removing: /var/run/dpdk/spdk2 00:37:10.864 Removing: /var/run/dpdk/spdk3 00:37:10.864 Removing: /var/run/dpdk/spdk4 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3841363 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3842092 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3842909 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3843346 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3844039 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3844179 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3844893 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3844904 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3845146 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3846928 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3847991 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3848306 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3848491 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3848691 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3848881 00:37:10.864 Removing: 
/var/run/dpdk/spdk_pid3849042 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3849196 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3849374 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3849952 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3852301 00:37:10.864 Removing: /var/run/dpdk/spdk_pid3852467 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3852629 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3852651 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853059 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853073 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853495 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853506 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853798 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853804 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3853966 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3854054 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3854465 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3854619 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3854816 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3854986 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3855012 00:37:10.865 Removing: /var/run/dpdk/spdk_pid3855197 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3855350 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3855573 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3855784 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3855942 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3856099 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3856370 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3856532 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3856685 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3856884 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3857115 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3857277 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3857444 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3857717 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3857880 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3858032 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3858306 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3858465 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3858634 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3858785 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3859062 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3859131 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3859335 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3861383 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3914930 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3917419 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3924364 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3927533 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3929867 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3930274 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3937505 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3937507 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3938662 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3939317 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3939864 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3940257 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3940378 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3940521 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3940658 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3940660 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3941316 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3941849 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3942513 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3942911 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3942913 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3943172 00:37:11.123 Removing: 
/var/run/dpdk/spdk_pid3944048 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3944770 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3950118 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3950308 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3952895 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3956473 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3958654 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3964906 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3970706 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3971899 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3972564 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3982756 00:37:11.123 Removing: /var/run/dpdk/spdk_pid3984960 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4010034 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4012822 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4014000 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4015201 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4015326 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4015466 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4015573 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4015919 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4017232 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4017950 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4018265 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4019873 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4020294 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4020736 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4023247 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4027112 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4030526 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4053628 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4056795 00:37:11.123 Removing: /var/run/dpdk/spdk_pid4060566 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4061565 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4062590 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4065137 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4067487 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4071570 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4071691 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4074449 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4074584 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4074728 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4074994 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4075005 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4076179 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4077368 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4078547 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4079725 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4080908 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4082084 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4086153 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4086839 00:37:11.124 Removing: /var/run/dpdk/spdk_pid4088124 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4088968 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4092558 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4094540 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4097940 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4101246 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4107415 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4111813 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4111816 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4124615 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4125026 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4125430 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4125882 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4126412 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4126881 00:37:11.382 Removing: 
/var/run/dpdk/spdk_pid4127343 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4127751 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4130125 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4130325 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4134051 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4134227 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4135821 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4140727 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4140738 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4143629 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4145026 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4146429 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4147169 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4148567 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4149559 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4155322 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4155710 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4156101 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4157657 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4157984 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4158335 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4160771 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4160775 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4162234 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4162596 00:37:11.382 Removing: /var/run/dpdk/spdk_pid4162698 00:37:11.382 Clean 00:37:11.382 20:07:20 -- common/autotest_common.sh@1447 -- # return 0 00:37:11.382 20:07:20 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:11.382 20:07:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.382 20:07:20 -- common/autotest_common.sh@10 -- # set +x 00:37:11.382 20:07:20 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:11.382 20:07:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.382 20:07:20 -- common/autotest_common.sh@10 -- # set +x 00:37:11.382 20:07:20 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:11.382 20:07:20 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:11.382 20:07:20 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:11.382 20:07:20 -- spdk/autotest.sh@391 -- # hash lcov 00:37:11.382 20:07:20 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:11.382 20:07:20 -- spdk/autotest.sh@393 -- # hostname 00:37:11.382 20:07:20 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:11.640 geninfo: WARNING: invalid characters removed from testname! 
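After the workspace cleanup, autotest collects code coverage with lcov: it captures counters from the spdk tree (the geninfo warning above comes from that capture), merges them with the pre-test baseline, and then strips external and DPDK sources in the steps that follow. A condensed sketch with the long Jenkins paths shortened and the --rc flag list abbreviated; the flags shown are taken from the trace, the relative paths are illustrative.

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    lcov $LCOV_OPTS -c -d ./spdk -t spdk-gp-11 -o output/cov_test.info                        # capture test counters (this step)
    lcov $LCOV_OPTS -a output/cov_base.info -a output/cov_test.info -o output/cov_total.info  # merge with the baseline
    lcov $LCOV_OPTS -r output/cov_total.info '*/dpdk/*' -o output/cov_total.info              # prune DPDK sources (see trace below)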
00:37:43.697 20:07:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:43.697 20:07:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:46.219 20:07:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:49.496 20:07:58 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:52.022 20:08:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:54.542 20:08:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:57.860 20:08:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:57.860 20:08:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.860 20:08:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:57.860 20:08:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.860 20:08:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.860 20:08:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.860 20:08:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.860 20:08:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.860 20:08:06 -- paths/export.sh@5 -- $ export PATH 00:37:57.860 20:08:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.860 20:08:06 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:57.860 20:08:06 -- common/autobuild_common.sh@440 -- $ date +%s 00:37:57.860 20:08:06 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721930886.XXXXXX 00:37:57.860 20:08:06 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721930886.j81JX8 00:37:57.860 20:08:06 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:37:57.860 20:08:06 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:37:57.860 20:08:06 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:57.860 20:08:06 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:57.860 20:08:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:57.860 20:08:06 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:57.860 20:08:06 -- common/autobuild_common.sh@456 -- $ get_config_params 00:37:57.860 20:08:06 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:37:57.860 20:08:06 -- common/autotest_common.sh@10 -- $ set +x 00:37:57.860 20:08:06 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:57.860 20:08:06 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:37:57.860 20:08:06 -- pm/common@17 -- $ local monitor 00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.860 
00:37:57.860 20:08:06 -- common/autobuild_common.sh@458 -- $ start_monitor_resources
00:37:57.860 20:08:06 -- pm/common@17 -- $ local monitor
00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.860 20:08:06 -- pm/common@21 -- $ date +%s
00:37:57.860 20:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.860 20:08:06 -- pm/common@21 -- $ date +%s
00:37:57.860 20:08:06 -- pm/common@25 -- $ sleep 1
00:37:57.860 20:08:06 -- pm/common@21 -- $ date +%s
00:37:57.860 20:08:06 -- pm/common@21 -- $ date +%s
00:37:57.860 20:08:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721930886
00:37:57.860 20:08:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721930886
00:37:57.860 20:08:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721930886
00:37:57.860 20:08:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721930886
00:37:57.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721930886_collect-vmstat.pm.log
00:37:57.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721930886_collect-cpu-load.pm.log
00:37:57.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721930886_collect-cpu-temp.pm.log
00:37:57.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721930886_collect-bmc-pm.bmc.pm.log
00:37:58.799 20:08:07 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT
00:37:58.799 20:08:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:58.799 20:08:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:58.799 20:08:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:58.799 20:08:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:58.799 20:08:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:58.799 20:08:07 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:58.799 20:08:07 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:58.799 20:08:07 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:58.799 20:08:07 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:58.799 20:08:07 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:58.799 20:08:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:58.799 20:08:07 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:58.799 20:08:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:58.799 20:08:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.799 20:08:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:58.799 20:08:07 -- pm/common@44 -- $ pid=4173958
00:37:58.799 20:08:07 -- pm/common@50 -- $ kill -TERM 4173958
00:37:58.799 20:08:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.799 20:08:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:58.799 20:08:07 -- pm/common@44 -- $ pid=4173960
00:37:58.799 20:08:07 -- pm/common@50 -- $ kill -TERM 4173960
00:37:58.799 20:08:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.799 20:08:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:58.799 20:08:07 -- pm/common@44 -- $ pid=4173961
00:37:58.799 20:08:07 -- pm/common@50 -- $ kill -TERM 4173961
00:37:58.799 20:08:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.799 20:08:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:58.799 20:08:07 -- pm/common@44 -- $ pid=4173992
00:37:58.799 20:08:07 -- pm/common@50 -- $ sudo -E kill -TERM 4173992
00:37:58.799 + [[ -n 3737257 ]]
00:37:58.799 + sudo kill 3737257
00:37:58.808 [Pipeline] }
00:37:58.828 [Pipeline] // stage
00:37:58.833 [Pipeline] }
00:37:58.851 [Pipeline] // timeout
00:37:58.856 [Pipeline] }
00:37:58.873 [Pipeline] // catchError
00:37:58.879 [Pipeline] }
00:37:58.896 [Pipeline] // wrap
00:37:58.902 [Pipeline] }
00:37:58.917 [Pipeline] // catchError
00:37:58.926 [Pipeline] stage
00:37:58.928 [Pipeline] { (Epilogue)
00:37:58.942 [Pipeline] catchError
00:37:58.944 [Pipeline] {
00:37:58.958 [Pipeline] echo
00:37:58.960 Cleanup processes
00:37:58.965 [Pipeline] sh
00:37:59.251 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:59.251 4174103 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:59.251 4174224 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:59.264 [Pipeline] sh
00:37:59.547 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:59.547 ++ grep -v 'sudo pgrep'
00:37:59.547 ++ awk '{print $1}'
00:37:59.547 + sudo kill -9 4174103
00:37:59.560 [Pipeline] sh
00:37:59.843 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:09.820 [Pipeline] sh
00:38:10.106 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:10.107 Artifacts sizes are good
00:38:10.123 [Pipeline] archiveArtifacts
00:38:10.130 Archiving artifacts
00:38:10.376 [Pipeline] sh
00:38:10.660 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:10.676 [Pipeline] cleanWs
00:38:10.686 [WS-CLEANUP] Deleting project workspace...
00:38:10.686 [WS-CLEANUP] Deferred wipeout is used...
00:38:10.694 [WS-CLEANUP] done
00:38:10.696 [Pipeline] }
00:38:10.717 [Pipeline] // catchError
00:38:10.730 [Pipeline] sh
00:38:11.010 + logger -p user.info -t JENKINS-CI
00:38:11.019 [Pipeline] }
00:38:11.034 [Pipeline] // stage
00:38:11.040 [Pipeline] }
00:38:11.057 [Pipeline] // node
00:38:11.063 [Pipeline] End of Pipeline
00:38:11.089 Finished: SUCCESS